Test Report: Docker_Linux_crio 21924

af8f7912417d9ebc8a76a18bcb87417cd1a63b57:2025-11-19:42387

Failed tests (37/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 12.55
36 TestAddons/parallel/RegistryCreds 0.43
37 TestAddons/parallel/Ingress 148.91
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.3
41 TestAddons/parallel/CSI 41.47
42 TestAddons/parallel/Headlamp 2.36
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 8.1
45 TestAddons/parallel/NvidiaDevicePlugin 5.24
46 TestAddons/parallel/Yakd 5.27
47 TestAddons/parallel/AmdGpuDevicePlugin 6.24
97 TestFunctional/parallel/ServiceCmdConnect 602.76
121 TestFunctional/parallel/ServiceCmd/DeployApp 600.57
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.85
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.28
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
153 TestFunctional/parallel/ServiceCmd/Format 0.52
154 TestFunctional/parallel/ServiceCmd/URL 0.52
191 TestJSONOutput/pause/Command 1.9
197 TestJSONOutput/unpause/Command 1.61
261 TestPause/serial/Pause 7.85
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.11
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.11
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.07
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.05
370 TestStartStop/group/old-k8s-version/serial/Pause 5.91
377 TestStartStop/group/embed-certs/serial/Pause 5.65
381 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.56
384 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.08
387 TestStartStop/group/no-preload/serial/Pause 5.71
393 TestStartStop/group/newest-cni/serial/Pause 5.62
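Note: the three addons-disable failures detailed below (Volcano, Registry, RegistryCreds) share one signature — each `addons disable` exits with status 11 because the paused-cluster probe `sudo runc list -f json` fails on the crio node with `open /run/runc: no such file or directory`, and minikube aborts with MK_ADDON_DISABLE_PAUSED. A reproduction sketch follows the Volcano trace.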
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable volcano --alsologtostderr -v=1: exit status 11 (245.841081ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:58:27.835605   23911 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:27.835902   23911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:27.835912   23911 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:27.835917   23911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:27.836085   23911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:27.836318   23911 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:27.836648   23911 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:27.836662   23911 addons.go:607] checking whether the cluster is paused
	I1119 01:58:27.836740   23911 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:27.836751   23911 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:27.837105   23911 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:27.856640   23911 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:27.856684   23911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:27.876571   23911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:27.969223   23911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:27.969302   23911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:27.998022   23911 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:27.998040   23911 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:27.998054   23911 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:27.998057   23911 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:27.998060   23911 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:27.998064   23911 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:27.998068   23911 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:27.998073   23911 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:27.998077   23911 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:27.998084   23911 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:27.998089   23911 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:27.998092   23911 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:27.998097   23911 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:27.998101   23911 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:27.998105   23911 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:27.998110   23911 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:27.998113   23911 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:27.998117   23911 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:27.998119   23911 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:27.998122   23911 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:27.998124   23911 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:27.998127   23911 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:27.998129   23911 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:27.998133   23911 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:27.998135   23911 cri.go:89] found id: ""
	I1119 01:58:27.998171   23911 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:28.011222   23911 out.go:203] 
	W1119 01:58:28.012382   23911 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:28.012401   23911 out.go:285] * 
	* 
	W1119 01:58:28.015589   23911 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:28.016810   23911 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
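For reference, a minimal standalone sketch (not minikube's actual code) of the probe that fails above: the stderr trace shows the disable path first listing kube-system containers with crictl (which succeeds) and then calling `runc list` (which fails, because crio does not populate /run/runc). The program below reproduces both steps over os/exec; it assumes it runs on the node itself (e.g. via `minikube ssh`) with sudo available.

-- example (Go) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1 (succeeds in the trace above): list kube-system containers
	// through the CRI, the same command cri.go runs.
	crictl := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	if out, err := crictl.CombinedOutput(); err != nil {
		fmt.Println("crictl failed:", err)
	} else {
		fmt.Printf("crictl returned %d bytes of container IDs\n", len(out))
	}

	// Step 2 (fails in the trace above): ask runc for its container list.
	// crio does not maintain /run/runc, so runc exits 1 with
	// "open /run/runc: no such file or directory" and minikube aborts the
	// disable with MK_ADDON_DISABLE_PAUSED.
	runc := exec.Command("sudo", "runc", "list", "-f", "json")
	if out, err := runc.CombinedOutput(); err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
	}
}
-- /example --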

TestAddons/parallel/Registry (12.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.977145ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002273333s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00307208s
addons_test.go:392: (dbg) Run:  kubectl --context addons-167289 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-167289 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-167289 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.118784668s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 ip
2025/11/19 01:58:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable registry --alsologtostderr -v=1: exit status 11 (231.688621ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:58:49.149699   26480 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:49.149870   26480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:49.149881   26480 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:49.149888   26480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:49.150066   26480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:49.150337   26480 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:49.150698   26480 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:49.150715   26480 addons.go:607] checking whether the cluster is paused
	I1119 01:58:49.150826   26480 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:49.150845   26480 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:49.151312   26480 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:49.170508   26480 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:49.170553   26480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:49.186708   26480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:49.278470   26480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:49.278552   26480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:49.306656   26480 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:49.306678   26480 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:49.306682   26480 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:49.306685   26480 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:49.306688   26480 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:49.306692   26480 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:49.306696   26480 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:49.306700   26480 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:49.306704   26480 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:49.306718   26480 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:49.306722   26480 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:49.306727   26480 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:49.306731   26480 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:49.306736   26480 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:49.306740   26480 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:49.306748   26480 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:49.306756   26480 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:49.306762   26480 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:49.306766   26480 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:49.306770   26480 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:49.306779   26480 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:49.306782   26480 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:49.306786   26480 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:49.306793   26480 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:49.306805   26480 cri.go:89] found id: ""
	I1119 01:58:49.306849   26480 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:49.320078   26480 out.go:203] 
	W1119 01:58:49.321337   26480 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:49.321352   26480 out.go:285] * 
	* 
	W1119 01:58:49.324311   26480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:49.325607   26480 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.55s)
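Note that the registry functionality itself passed: both the registry and registry-proxy pods went healthy within ~5s, and the in-cluster `wget --spider` probe against registry.kube-system.svc.cluster.local completed in ~2.1s. Only the trailing `addons disable registry` teardown failed, with the same runc probe error as in the Volcano trace.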

TestAddons/parallel/RegistryCreds (0.43s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.711644ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-167289
addons_test.go:332: (dbg) Run:  kubectl --context addons-167289 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (275.939093ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:58:42.305515   25248 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:42.305803   25248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:42.305813   25248 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:42.305818   25248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:42.306001   25248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:42.306282   25248 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:42.306644   25248 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:42.306661   25248 addons.go:607] checking whether the cluster is paused
	I1119 01:58:42.306741   25248 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:42.306753   25248 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:42.307132   25248 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:42.328186   25248 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:42.328239   25248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:42.348330   25248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:42.453148   25248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:42.453252   25248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:42.490080   25248 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:42.490122   25248 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:42.490128   25248 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:42.490133   25248 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:42.490137   25248 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:42.490142   25248 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:42.490146   25248 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:42.490150   25248 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:42.490154   25248 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:42.490166   25248 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:42.490170   25248 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:42.490174   25248 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:42.490179   25248 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:42.490183   25248 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:42.490186   25248 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:42.490200   25248 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:42.490206   25248 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:42.490212   25248 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:42.490216   25248 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:42.490220   25248 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:42.490224   25248 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:42.490227   25248 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:42.490231   25248 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:42.490235   25248 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:42.490238   25248 cri.go:89] found id: ""
	I1119 01:58:42.490303   25248 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:42.508587   25248 out.go:203] 
	W1119 01:58:42.509947   25248 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:42.510483   25248 out.go:285] * 
	* 
	W1119 01:58:42.514018   25248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:42.515577   25248 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.43s)

TestAddons/parallel/Ingress (148.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-167289 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-167289 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-167289 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [e485c5b8-b260-486f-a666-0047f9a58a21] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [e485c5b8-b260-486f-a666-0047f9a58a21] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002660339s
I1119 01:58:51.482543   14634 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.601960939s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-167289 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
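Before the post-mortem logs: the in-node curl hung for the full 2m16s and ssh reported exit status 28, which matches curl's CURLE_OPERATION_TIMEDOUT code — the ingress controller apparently never answered on 127.0.0.1:80 inside the node. A hedged client-side equivalent of that probe, aimed at the node IP from the Audit log (assumes the cluster is still running and the host can reach 192.168.49.2):

-- example (Go) --
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same request the test issues in-node via curl, but sent from the host
	// to the node IP (192.168.49.2 in this run), with a 30s cap instead of
	// hanging for minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // routes the request through the nginx Ingress rule

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("probe failed (consistent with the test's timeout):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("ingress answered:", resp.Status)
}
-- /example --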
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-167289
helpers_test.go:243: (dbg) docker inspect addons-167289:

-- stdout --
	[
	    {
	        "Id": "1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799",
	        "Created": "2025-11-19T01:56:47.824620544Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16629,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T01:56:47.855109279Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799/hostname",
	        "HostsPath": "/var/lib/docker/containers/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799/hosts",
	        "LogPath": "/var/lib/docker/containers/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799-json.log",
	        "Name": "/addons-167289",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-167289:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-167289",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799",
	                "LowerDir": "/var/lib/docker/overlay2/b471f4f520aa3dc0a41036926c583ed8e6e188a581176a3fbf87df8a1904e828-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b471f4f520aa3dc0a41036926c583ed8e6e188a581176a3fbf87df8a1904e828/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b471f4f520aa3dc0a41036926c583ed8e6e188a581176a3fbf87df8a1904e828/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b471f4f520aa3dc0a41036926c583ed8e6e188a581176a3fbf87df8a1904e828/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-167289",
	                "Source": "/var/lib/docker/volumes/addons-167289/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-167289",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-167289",
	                "name.minikube.sigs.k8s.io": "addons-167289",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "30075995df53fcbc60726c257c3c4e14775bf796b4eabe17b742c7954574fb34",
	            "SandboxKey": "/var/run/docker/netns/30075995df53",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-167289": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73031ed466e39bf7065dcfea0bf4a86593aa5b31f488c3d7233eef8fc32876c2",
	                    "EndpointID": "a685cc51d7b6d26d63caa7a962150b0642e1f8b1b8fa14e41ac97ff3d341da54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "86:3b:cd:26:f8:d9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-167289",
	                        "1203decade43"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
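The Ports map in the inspect output above is how the harness reaches the node: each container port is published on 127.0.0.1 with an ephemeral host port, and the stderr traces show minikube resolving the SSH port with a docker inspect Go template. A standalone equivalent (hypothetical helper, but using the same template that appears verbatim in the logs):

-- example (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the host port Docker published for a container port,
// using the same Go-template query seen in the stderr traces above.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	p, err := hostPort("addons-167289", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh is published on 127.0.0.1:" + p) // 32768 in this run
}
-- /example --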
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-167289 -n addons-167289
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-167289 logs -n 25: (1.070244475s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-075616 --alsologtostderr --binary-mirror http://127.0.0.1:42109 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-075616 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ -p binary-mirror-075616                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-075616 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ addons  │ enable dashboard -p addons-167289                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-167289                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ start   │ -p addons-167289 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:58 UTC │
	│ addons  │ addons-167289 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ enable headlamp -p addons-167289 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-167289                                                                                                                                                                                                                                                                                                                                                                                           │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │ 19 Nov 25 01:58 UTC │
	│ addons  │ addons-167289 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ ip      │ addons-167289 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │ 19 Nov 25 01:58 UTC │
	│ addons  │ addons-167289 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ ssh     │ addons-167289 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ ssh     │ addons-167289 ssh cat /opt/local-path-provisioner/pvc-18f6647e-e829-4ddb-8ec4-c1f78ee38e49_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-167289 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │                     │
	│ addons  │ addons-167289 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │                     │
	│ addons  │ addons-167289 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │                     │
	│ ip      │ addons-167289 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-167289        │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │ 19 Nov 25 02:01 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
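
The header above documents the standard glog/klog prolog, and every subsequent entry follows the `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` shape. For post-processing a log like this one, a minimal Go sketch of a field extractor (the regex is an assumption that covers the lines shown here, not an official klog grammar):

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches entries like:
	//   I1119 01:56:24.835519   15977 out.go:360] Setting OutFile to fd 1 ...
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:\]]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1119 01:56:24.835519   15977 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s/%s time=%s pid=%s source=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
		}
	}
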
	I1119 01:56:24.835519   15977 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:24.835621   15977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:24.835634   15977 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:24.835640   15977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:24.835847   15977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:56:24.836328   15977 out.go:368] Setting JSON to false
	I1119 01:56:24.837186   15977 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2332,"bootTime":1763515053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:24.837236   15977 start.go:143] virtualization: kvm guest
	I1119 01:56:24.839177   15977 out.go:179] * [addons-167289] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 01:56:24.840496   15977 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 01:56:24.840495   15977 notify.go:221] Checking for updates...
	I1119 01:56:24.843183   15977 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:24.844307   15977 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 01:56:24.845373   15977 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 01:56:24.846332   15977 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 01:56:24.847339   15977 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 01:56:24.848496   15977 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:24.869880   15977 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 01:56:24.870002   15977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:24.921236   15977 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 01:56:24.912714626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:24.921337   15977 docker.go:319] overlay module found
	I1119 01:56:24.922896   15977 out.go:179] * Using the docker driver based on user configuration
	I1119 01:56:24.923876   15977 start.go:309] selected driver: docker
	I1119 01:56:24.923890   15977 start.go:930] validating driver "docker" against <nil>
	I1119 01:56:24.923903   15977 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 01:56:24.924382   15977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:24.979268   15977 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 01:56:24.969260228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:24.979418   15977 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:24.979628   15977 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:56:24.981157   15977 out.go:179] * Using Docker driver with root privileges
	I1119 01:56:24.982247   15977 cni.go:84] Creating CNI manager for ""
	I1119 01:56:24.982309   15977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:56:24.982319   15977 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:56:24.982368   15977 start.go:353] cluster config:
	{Name:addons-167289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:24.983537   15977 out.go:179] * Starting "addons-167289" primary control-plane node in "addons-167289" cluster
	I1119 01:56:24.984652   15977 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 01:56:24.985660   15977 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:56:24.986682   15977 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:56:24.986708   15977 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 01:56:24.986716   15977 cache.go:65] Caching tarball of preloaded images
	I1119 01:56:24.986760   15977 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:56:24.986822   15977 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 01:56:24.986837   15977 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 01:56:24.987201   15977 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/config.json ...
	I1119 01:56:24.987228   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/config.json: {Name:mk1bcbc978f0a0c87baad2741a38ecbb57ca6166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:25.001654   15977 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:25.001796   15977 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:56:25.001813   15977 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1119 01:56:25.001817   15977 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1119 01:56:25.001823   15977 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1119 01:56:25.001835   15977 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1119 01:56:36.716217   15977 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1119 01:56:36.716279   15977 cache.go:243] Successfully downloaded all kic artifacts
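
The entries from 01:56:24.986760 through 01:56:36.716279 show the kic base image lookup order: local docker daemon first, then the on-disk tarball cache, with a registry pull only as a last resort. A hedged Go sketch of that daemon-first check (the helper and the fallback message are illustrative assumptions, not minikube's actual cache code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inDaemon reports whether the local docker daemon already has the image;
	// "docker image inspect" exits non-zero when the reference is unknown.
	func inDaemon(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		// Abbreviated reference; the log uses the full @sha256-pinned form.
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924"
		if inDaemon(ref) {
			fmt.Println("found in local daemon, skipping pull")
			return
		}
		// Fall back to an on-disk tarball cache, then to a registry pull
		// (both elided here); the log above takes the tarball path.
		fmt.Println("not in daemon; would load from cache or pull")
	}
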
	I1119 01:56:36.716361   15977 start.go:360] acquireMachinesLock for addons-167289: {Name:mk52be43c4a7bd92286dd93acb8c958bd94a02c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 01:56:36.716498   15977 start.go:364] duration metric: took 97.124µs to acquireMachinesLock for "addons-167289"
	I1119 01:56:36.716530   15977 start.go:93] Provisioning new machine with config: &{Name:addons-167289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:56:36.716602   15977 start.go:125] createHost starting for "" (driver="docker")
	I1119 01:56:36.718205   15977 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1119 01:56:36.718463   15977 start.go:159] libmachine.API.Create for "addons-167289" (driver="docker")
	I1119 01:56:36.718491   15977 client.go:173] LocalClient.Create starting
	I1119 01:56:36.718575   15977 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 01:56:36.877819   15977 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 01:56:37.014895   15977 cli_runner.go:164] Run: docker network inspect addons-167289 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 01:56:37.032008   15977 cli_runner.go:211] docker network inspect addons-167289 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 01:56:37.032091   15977 network_create.go:284] running [docker network inspect addons-167289] to gather additional debugging logs...
	I1119 01:56:37.032107   15977 cli_runner.go:164] Run: docker network inspect addons-167289
	W1119 01:56:37.046961   15977 cli_runner.go:211] docker network inspect addons-167289 returned with exit code 1
	I1119 01:56:37.046984   15977 network_create.go:287] error running [docker network inspect addons-167289]: docker network inspect addons-167289: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-167289 not found
	I1119 01:56:37.046995   15977 network_create.go:289] output of [docker network inspect addons-167289]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-167289 not found
	
	** /stderr **
	I1119 01:56:37.047088   15977 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 01:56:37.061486   15977 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c85550}
	I1119 01:56:37.061525   15977 network_create.go:124] attempt to create docker network addons-167289 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1119 01:56:37.061568   15977 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-167289 addons-167289
	I1119 01:56:37.104298   15977 network_create.go:108] docker network addons-167289 192.168.49.0/24 created
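
At 01:56:37 network.go picked 192.168.49.0/24 as a "free private subnet" before creating the bridge network. One simple way to approximate that behaviour is to let `docker network create` arbitrate: it refuses overlapping subnets, so trying candidates in order until one succeeds lands on a free range. A sketch under that assumption (the candidate list and network name are invented; minikube's real selection logic differs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical private /24 candidates, tried in order.
		candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
		for _, subnet := range candidates {
			cmd := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "example-net")
			if err := cmd.Run(); err == nil {
				fmt.Println("created network on", subnet)
				return
			}
			// On subnet overlap (or name clash) docker exits non-zero,
			// and we fall through to the next candidate.
		}
		fmt.Println("no free subnet found")
	}
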
	I1119 01:56:37.104322   15977 kic.go:121] calculated static IP "192.168.49.2" for the "addons-167289" container
	I1119 01:56:37.104394   15977 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 01:56:37.119042   15977 cli_runner.go:164] Run: docker volume create addons-167289 --label name.minikube.sigs.k8s.io=addons-167289 --label created_by.minikube.sigs.k8s.io=true
	I1119 01:56:37.135147   15977 oci.go:103] Successfully created a docker volume addons-167289
	I1119 01:56:37.135235   15977 cli_runner.go:164] Run: docker run --rm --name addons-167289-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-167289 --entrypoint /usr/bin/test -v addons-167289:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 01:56:43.552192   15977 cli_runner.go:217] Completed: docker run --rm --name addons-167289-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-167289 --entrypoint /usr/bin/test -v addons-167289:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (6.416919548s)
	I1119 01:56:43.552221   15977 oci.go:107] Successfully prepared a docker volume addons-167289
	I1119 01:56:43.552278   15977 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:56:43.552300   15977 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 01:56:43.552344   15977 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-167289:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 01:56:47.757292   15977 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-167289:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.204905972s)
	I1119 01:56:47.757322   15977 kic.go:203] duration metric: took 4.205016937s to extract preloaded images to volume ...
	W1119 01:56:47.757414   15977 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 01:56:47.757482   15977 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 01:56:47.757521   15977 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 01:56:47.810662   15977 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-167289 --name addons-167289 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-167289 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-167289 --network addons-167289 --ip 192.168.49.2 --volume addons-167289:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 01:56:48.100601   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Running}}
	I1119 01:56:48.118079   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:56:48.135602   15977 cli_runner.go:164] Run: docker exec addons-167289 stat /var/lib/dpkg/alternatives/iptables
	I1119 01:56:48.178763   15977 oci.go:144] the created container "addons-167289" has a running status.
	I1119 01:56:48.178794   15977 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa...
	I1119 01:56:48.375807   15977 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 01:56:48.413219   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:56:48.431260   15977 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 01:56:48.431373   15977 kic_runner.go:114] Args: [docker exec --privileged addons-167289 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 01:56:48.481205   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:56:48.499111   15977 machine.go:94] provisionDockerMachine start ...
	I1119 01:56:48.499200   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:48.517051   15977 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:48.517290   15977 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 01:56:48.517308   15977 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 01:56:48.648770   15977 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-167289
	
	I1119 01:56:48.648800   15977 ubuntu.go:182] provisioning hostname "addons-167289"
	I1119 01:56:48.648895   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:48.666768   15977 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:48.666973   15977 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 01:56:48.666991   15977 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-167289 && echo "addons-167289" | sudo tee /etc/hostname
	I1119 01:56:48.804457   15977 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-167289
	
	I1119 01:56:48.804546   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:48.822232   15977 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:48.822490   15977 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 01:56:48.822515   15977 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-167289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-167289/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-167289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 01:56:48.949082   15977 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 01:56:48.949109   15977 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 01:56:48.949136   15977 ubuntu.go:190] setting up certificates
	I1119 01:56:48.949146   15977 provision.go:84] configureAuth start
	I1119 01:56:48.949190   15977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-167289
	I1119 01:56:48.965294   15977 provision.go:143] copyHostCerts
	I1119 01:56:48.965361   15977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 01:56:48.965510   15977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 01:56:48.965592   15977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 01:56:48.965658   15977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.addons-167289 san=[127.0.0.1 192.168.49.2 addons-167289 localhost minikube]
	I1119 01:56:49.292476   15977 provision.go:177] copyRemoteCerts
	I1119 01:56:49.292537   15977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 01:56:49.292569   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.309206   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:49.401495   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 01:56:49.418622   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 01:56:49.433842   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 01:56:49.448900   15977 provision.go:87] duration metric: took 499.743835ms to configureAuth
	I1119 01:56:49.448919   15977 ubuntu.go:206] setting minikube options for container-runtime
	I1119 01:56:49.449060   15977 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:56:49.449151   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.465614   15977 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:49.465830   15977 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 01:56:49.465852   15977 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 01:56:49.716393   15977 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 01:56:49.716419   15977 machine.go:97] duration metric: took 1.217285484s to provisionDockerMachine
	I1119 01:56:49.716455   15977 client.go:176] duration metric: took 12.997932509s to LocalClient.Create
	I1119 01:56:49.716478   15977 start.go:167] duration metric: took 12.998013526s to libmachine.API.Create "addons-167289"
	I1119 01:56:49.716488   15977 start.go:293] postStartSetup for "addons-167289" (driver="docker")
	I1119 01:56:49.716499   15977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 01:56:49.716570   15977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 01:56:49.716630   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.733630   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:49.826980   15977 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 01:56:49.830119   15977 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 01:56:49.830148   15977 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 01:56:49.830157   15977 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 01:56:49.830211   15977 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 01:56:49.830234   15977 start.go:296] duration metric: took 113.740218ms for postStartSetup
	I1119 01:56:49.830499   15977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-167289
	I1119 01:56:49.846945   15977 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/config.json ...
	I1119 01:56:49.847164   15977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 01:56:49.847201   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.863138   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:49.951724   15977 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 01:56:49.955824   15977 start.go:128] duration metric: took 13.239208195s to createHost
	I1119 01:56:49.955848   15977 start.go:83] releasing machines lock for "addons-167289", held for 13.239332596s
	I1119 01:56:49.955912   15977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-167289
	I1119 01:56:49.972152   15977 ssh_runner.go:195] Run: cat /version.json
	I1119 01:56:49.972192   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.972241   15977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 01:56:49.972308   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.989273   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:49.989286   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:50.130548   15977 ssh_runner.go:195] Run: systemctl --version
	I1119 01:56:50.136181   15977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 01:56:50.167016   15977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 01:56:50.170961   15977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 01:56:50.171015   15977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 01:56:50.194209   15977 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 01:56:50.194227   15977 start.go:496] detecting cgroup driver to use...
	I1119 01:56:50.194256   15977 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 01:56:50.194296   15977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 01:56:50.208696   15977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 01:56:50.219215   15977 docker.go:218] disabling cri-docker service (if available) ...
	I1119 01:56:50.219269   15977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 01:56:50.233605   15977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 01:56:50.248880   15977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 01:56:50.319820   15977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 01:56:50.402925   15977 docker.go:234] disabling docker service ...
	I1119 01:56:50.402991   15977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 01:56:50.419273   15977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 01:56:50.430488   15977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 01:56:50.505447   15977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 01:56:50.578770   15977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 01:56:50.590282   15977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 01:56:50.603110   15977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 01:56:50.603161   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.612695   15977 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 01:56:50.612748   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.620813   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.629133   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.636927   15977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 01:56:50.644134   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.651707   15977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.663500   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
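
The run of sed/grep edits between 01:56:50.603 and 01:56:50.663 shapes /etc/crio/crio.conf.d/02-crio.conf in place. Once they have all applied, the touched keys should read roughly as follows (an assumed reconstruction from the sed expressions above, not a dump of the actual file):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
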
	I1119 01:56:50.671047   15977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 01:56:50.677262   15977 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 01:56:50.677323   15977 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 01:56:50.688271   15977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 01:56:50.694774   15977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:56:50.766633   15977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 01:56:50.892540   15977 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 01:56:50.892614   15977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 01:56:50.896082   15977 start.go:564] Will wait 60s for crictl version
	I1119 01:56:50.896130   15977 ssh_runner.go:195] Run: which crictl
	I1119 01:56:50.899382   15977 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 01:56:50.922696   15977 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 01:56:50.922798   15977 ssh_runner.go:195] Run: crio --version
	I1119 01:56:50.948424   15977 ssh_runner.go:195] Run: crio --version
	I1119 01:56:50.975450   15977 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 01:56:50.976599   15977 cli_runner.go:164] Run: docker network inspect addons-167289 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 01:56:50.992869   15977 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 01:56:50.996515   15977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 01:56:51.005831   15977 kubeadm.go:884] updating cluster {Name:addons-167289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 01:56:51.005948   15977 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:56:51.006003   15977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:56:51.033312   15977 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:56:51.033329   15977 crio.go:433] Images already preloaded, skipping extraction
	I1119 01:56:51.033366   15977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:56:51.056752   15977 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:56:51.056785   15977 cache_images.go:86] Images are preloaded, skipping loading
	I1119 01:56:51.056794   15977 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1119 01:56:51.056900   15977 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-167289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 01:56:51.056977   15977 ssh_runner.go:195] Run: crio config
	I1119 01:56:51.098229   15977 cni.go:84] Creating CNI manager for ""
	I1119 01:56:51.098252   15977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:56:51.098270   15977 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 01:56:51.098297   15977 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-167289 NodeName:addons-167289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 01:56:51.098451   15977 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-167289"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
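	
The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the 2209-byte scp a few lines below). To sanity-check a config like this by hand, recent kubeadm releases ship a validator subcommand (hedged; confirm it exists in your kubeadm version before relying on it):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new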
	
	I1119 01:56:51.098516   15977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 01:56:51.105858   15977 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 01:56:51.105913   15977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 01:56:51.112704   15977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1119 01:56:51.123894   15977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 01:56:51.137256   15977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 01:56:51.148104   15977 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 01:56:51.151240   15977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 01:56:51.159854   15977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:56:51.232683   15977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:56:51.253355   15977 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289 for IP: 192.168.49.2
	I1119 01:56:51.253377   15977 certs.go:195] generating shared ca certs ...
	I1119 01:56:51.253395   15977 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.253539   15977 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 01:56:51.387457   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt ...
	I1119 01:56:51.387485   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt: {Name:mk7624723baa4df6f75e33083adc8e75b09c347a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.387637   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key ...
	I1119 01:56:51.387648   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key: {Name:mk47a85d97d3efb8b54a9dd78a07e03f896e8596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.387713   15977 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 01:56:51.550105   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt ...
	I1119 01:56:51.550129   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt: {Name:mk03a052d228a2f9c94e95bc8cad8b9967faf6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.550272   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key ...
	I1119 01:56:51.550283   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key: {Name:mk19bf33386de4407e06afcb75512ea7f42aac60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.550348   15977 certs.go:257] generating profile certs ...
	I1119 01:56:51.550404   15977 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.key
	I1119 01:56:51.550417   15977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt with IP's: []
	I1119 01:56:51.695176   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt ...
	I1119 01:56:51.695208   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: {Name:mk6f9d68eb87ffbb51f2d7fcd64ccd78e64e75f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.695342   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.key ...
	I1119 01:56:51.695352   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.key: {Name:mk90e076134c3c0597612cf93c59d3b0dee365e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.695416   15977 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key.92bd18a6
	I1119 01:56:51.695445   15977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt.92bd18a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1119 01:56:51.984762   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt.92bd18a6 ...
	I1119 01:56:51.984793   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt.92bd18a6: {Name:mk9306c98cc8a4b6a63c4265bf787d8931ce2151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.984974   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key.92bd18a6 ...
	I1119 01:56:51.984991   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key.92bd18a6: {Name:mke9067f3c97ee52a1131efd3fdbc03a2bba0c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.985087   15977 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt.92bd18a6 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt
	I1119 01:56:51.985189   15977 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key.92bd18a6 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key
	I1119 01:56:51.985263   15977 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.key
	I1119 01:56:51.985288   15977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.crt with IP's: []
	I1119 01:56:52.024885   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.crt ...
	I1119 01:56:52.024905   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.crt: {Name:mk4d1f77c92fd7452dd9b21a161766302088c130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:52.025023   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.key ...
	I1119 01:56:52.025037   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.key: {Name:mk6d3dac815ead4929147d5e91be6528764b981d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
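Two separate trust chains are being built here: minikubeCA signs the kubectl client cert and the apiserver serving cert (whose SANs cover the service VIP 10.96.0.1, loopback, and the node IP 192.168.49.2), while proxyClientCA signs the "aggregator" proxy-client cert used by the API aggregation layer. A quick sketch for inspecting the SANs on the generated apiserver cert, using the profile path from this run:

    # Verify the apiserver cert's Subject Alternative Names
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt \
      | grep -A1 'Subject Alternative Name'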
	I1119 01:56:52.025218   15977 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 01:56:52.025261   15977 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 01:56:52.025293   15977 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 01:56:52.025326   15977 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 01:56:52.025860   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 01:56:52.042924   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 01:56:52.059753   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 01:56:52.076910   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 01:56:52.092670   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 01:56:52.108952   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 01:56:52.124768   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 01:56:52.140225   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 01:56:52.155506   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 01:56:52.172564   15977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 01:56:52.183696   15977 ssh_runner.go:195] Run: openssl version
	I1119 01:56:52.189175   15977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 01:56:52.198834   15977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:52.202092   15977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:52.202127   15977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:52.235336   15977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
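The b5213941.0 symlink follows OpenSSL's trust-store convention: libraries look up a CA in /etc/ssl/certs by the hash of its subject name, with a .0 suffix to disambiguate collisions, and the openssl x509 -hash call two lines up computes exactly that value. As a sketch, the same linking step done by hand:

    # Link the CA under the subject-hash name OpenSSL actually searches for
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"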
	I1119 01:56:52.243361   15977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 01:56:52.246608   15977 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 01:56:52.246658   15977 kubeadm.go:401] StartCluster: {Name:addons-167289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:52.246733   15977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:56:52.246774   15977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:56:52.270758   15977 cri.go:89] found id: ""
	I1119 01:56:52.270809   15977 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 01:56:52.277954   15977 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 01:56:52.284942   15977 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 01:56:52.284981   15977 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 01:56:52.291670   15977 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 01:56:52.291700   15977 kubeadm.go:158] found existing configuration files:
	
	I1119 01:56:52.291737   15977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 01:56:52.298752   15977 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 01:56:52.298801   15977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 01:56:52.305205   15977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 01:56:52.311800   15977 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 01:56:52.311835   15977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 01:56:52.318060   15977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 01:56:52.324767   15977 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 01:56:52.324811   15977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 01:56:52.331194   15977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 01:56:52.337787   15977 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 01:56:52.337818   15977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
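The four grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected https://control-plane.minikube.internal:8443 endpoint and removed if it does not match (here none exist yet, so every grep exits 2), guaranteeing kubeadm regenerates them all. Condensed into one loop, a sketch of the same logic:

    # Drop kubeconfigs that do not point at the expected control-plane endpoint
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done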
	I1119 01:56:52.344295   15977 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 01:56:52.377648   15977 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 01:56:52.377707   15977 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 01:56:52.396146   15977 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 01:56:52.396223   15977 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 01:56:52.396255   15977 kubeadm.go:319] OS: Linux
	I1119 01:56:52.396308   15977 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 01:56:52.396362   15977 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 01:56:52.396414   15977 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 01:56:52.396510   15977 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 01:56:52.396558   15977 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 01:56:52.396658   15977 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 01:56:52.396746   15977 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 01:56:52.396820   15977 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 01:56:52.447127   15977 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 01:56:52.447253   15977 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 01:56:52.447391   15977 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 01:56:52.453951   15977 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 01:56:52.455977   15977 out.go:252]   - Generating certificates and keys ...
	I1119 01:56:52.456078   15977 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 01:56:52.456179   15977 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 01:56:52.616978   15977 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 01:56:52.691989   15977 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 01:56:52.877079   15977 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 01:56:53.267797   15977 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 01:56:53.532944   15977 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 01:56:53.533123   15977 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-167289 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 01:56:53.714275   15977 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 01:56:53.714473   15977 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-167289 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 01:56:53.910145   15977 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 01:56:54.092452   15977 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 01:56:54.505627   15977 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 01:56:54.505697   15977 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 01:56:54.705083   15977 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 01:56:54.827841   15977 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 01:56:55.203616   15977 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 01:56:56.065668   15977 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 01:56:56.299017   15977 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 01:56:56.299457   15977 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 01:56:56.302974   15977 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 01:56:56.305843   15977 out.go:252]   - Booting up control plane ...
	I1119 01:56:56.305956   15977 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 01:56:56.306070   15977 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 01:56:56.306185   15977 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 01:56:56.317461   15977 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 01:56:56.317582   15977 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 01:56:56.324424   15977 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 01:56:56.324924   15977 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 01:56:56.324992   15977 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 01:56:56.411922   15977 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 01:56:56.412038   15977 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 01:56:56.913498   15977 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.569186ms
	I1119 01:56:56.916327   15977 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 01:56:56.916412   15977 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1119 01:56:56.916545   15977 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 01:56:56.916631   15977 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 01:56:58.253703   15977 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.337270629s
	I1119 01:56:59.109663   15977 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.193280797s
	I1119 01:57:00.917456   15977 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001043439s
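kubeadm's control-plane-check phase polls three health endpoints until each answers: the apiserver's /livez on the advertise address, plus the controller-manager and scheduler health ports on loopback; here all three converged within about four seconds. The same probes can be issued by hand (the serving certs are self-signed, hence -k):

    # The three control-plane probes kubeadm performs, run manually
    curl -ksf https://192.168.49.2:8443/livez && echo apiserver ok
    curl -ksf https://127.0.0.1:10257/healthz && echo controller-manager ok
    curl -ksf https://127.0.0.1:10259/livez && echo scheduler ok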
	I1119 01:57:00.928067   15977 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 01:57:00.936953   15977 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 01:57:00.944485   15977 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 01:57:00.944727   15977 kubeadm.go:319] [mark-control-plane] Marking the node addons-167289 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 01:57:00.951995   15977 kubeadm.go:319] [bootstrap-token] Using token: 0jiben.vo8mj3kr3cd8jvp6
	I1119 01:57:00.953214   15977 out.go:252]   - Configuring RBAC rules ...
	I1119 01:57:00.953349   15977 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 01:57:00.955996   15977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 01:57:00.960359   15977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 01:57:00.963400   15977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 01:57:00.965302   15977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 01:57:00.967389   15977 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 01:57:01.322588   15977 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 01:57:01.735061   15977 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 01:57:02.322282   15977 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 01:57:02.322984   15977 kubeadm.go:319] 
	I1119 01:57:02.323043   15977 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 01:57:02.323070   15977 kubeadm.go:319] 
	I1119 01:57:02.323166   15977 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 01:57:02.323176   15977 kubeadm.go:319] 
	I1119 01:57:02.323205   15977 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 01:57:02.323273   15977 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 01:57:02.323346   15977 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 01:57:02.323362   15977 kubeadm.go:319] 
	I1119 01:57:02.323480   15977 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 01:57:02.323491   15977 kubeadm.go:319] 
	I1119 01:57:02.323563   15977 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 01:57:02.323570   15977 kubeadm.go:319] 
	I1119 01:57:02.323649   15977 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 01:57:02.323753   15977 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 01:57:02.323863   15977 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 01:57:02.323878   15977 kubeadm.go:319] 
	I1119 01:57:02.324001   15977 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 01:57:02.324121   15977 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 01:57:02.324137   15977 kubeadm.go:319] 
	I1119 01:57:02.324246   15977 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0jiben.vo8mj3kr3cd8jvp6 \
	I1119 01:57:02.324401   15977 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 01:57:02.324457   15977 kubeadm.go:319] 	--control-plane 
	I1119 01:57:02.324472   15977 kubeadm.go:319] 
	I1119 01:57:02.324586   15977 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 01:57:02.324595   15977 kubeadm.go:319] 
	I1119 01:57:02.324716   15977 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0jiben.vo8mj3kr3cd8jvp6 \
	I1119 01:57:02.324881   15977 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 01:57:02.326276   15977 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 01:57:02.326445   15977 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
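The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA: it is the SHA-256 digest of the CA's DER-encoded public key, so a joining node can verify it is talking to the right control plane before trusting anything it is handed. If the printed value is lost, it can be recomputed on this node with the standard kubeadm recipe (the certificateDir in this run is /var/lib/minikube/certs, per the [certs] phase above; the pipeline assumes an RSA CA key, minikube's default):

    # Recompute the discovery hash from the cluster CA certificate
    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'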
	I1119 01:57:02.326482   15977 cni.go:84] Creating CNI manager for ""
	I1119 01:57:02.326498   15977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:57:02.328327   15977 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 01:57:02.329374   15977 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 01:57:02.333275   15977 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 01:57:02.333289   15977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 01:57:02.345721   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 01:57:02.536658   15977 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 01:57:02.536732   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:02.536765   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-167289 minikube.k8s.io/updated_at=2025_11_19T01_57_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=addons-167289 minikube.k8s.io/primary=true
	I1119 01:57:02.618814   15977 ops.go:34] apiserver oom_adj: -16
	I1119 01:57:02.618856   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:03.119948   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:03.619280   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:04.119229   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:04.619579   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:05.119283   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:05.619695   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:06.119494   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:06.619864   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:07.119524   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:07.181709   15977 kubeadm.go:1114] duration metric: took 4.645035612s to wait for elevateKubeSystemPrivileges
	I1119 01:57:07.181743   15977 kubeadm.go:403] duration metric: took 14.935090313s to StartCluster
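The burst of identical "get sa default" calls above is a readiness poll: the minikube-rbac ClusterRoleBinding created at 01:57:02 is only useful once kubeadm's controllers have minted the default ServiceAccount, so minikube retries every 500ms until the get succeeds (about 4.6s in this run). A sketch of the same wait with a bounded retry count (the 60-iteration cap is illustrative, not minikube's):

    # Wait up to ~30s for the default ServiceAccount to exist
    for i in $(seq 1 60); do
      kubectl -n default get sa default >/dev/null 2>&1 && break
      sleep 0.5
    done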
	I1119 01:57:07.181762   15977 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:07.181875   15977 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 01:57:07.182220   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:07.182390   15977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 01:57:07.182415   15977 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:57:07.182499   15977 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 01:57:07.182634   15977 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:57:07.182651   15977 addons.go:70] Setting cloud-spanner=true in profile "addons-167289"
	I1119 01:57:07.182658   15977 addons.go:70] Setting ingress-dns=true in profile "addons-167289"
	I1119 01:57:07.182660   15977 addons.go:70] Setting volumesnapshots=true in profile "addons-167289"
	I1119 01:57:07.182675   15977 addons.go:239] Setting addon cloud-spanner=true in "addons-167289"
	I1119 01:57:07.182634   15977 addons.go:70] Setting yakd=true in profile "addons-167289"
	I1119 01:57:07.182683   15977 addons.go:70] Setting inspektor-gadget=true in profile "addons-167289"
	I1119 01:57:07.182706   15977 addons.go:70] Setting metrics-server=true in profile "addons-167289"
	I1119 01:57:07.182716   15977 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-167289"
	I1119 01:57:07.182720   15977 addons.go:70] Setting storage-provisioner=true in profile "addons-167289"
	I1119 01:57:07.182694   15977 addons.go:239] Setting addon yakd=true in "addons-167289"
	I1119 01:57:07.182741   15977 addons.go:239] Setting addon storage-provisioner=true in "addons-167289"
	I1119 01:57:07.182747   15977 addons.go:70] Setting gcp-auth=true in profile "addons-167289"
	I1119 01:57:07.182725   15977 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-167289"
	I1119 01:57:07.182764   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182765   15977 mustload.go:66] Loading cluster: addons-167289
	I1119 01:57:07.182772   15977 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-167289"
	I1119 01:57:07.182798   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182809   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182709   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182653   15977 addons.go:70] Setting volcano=true in profile "addons-167289"
	I1119 01:57:07.182934   15977 addons.go:239] Setting addon volcano=true in "addons-167289"
	I1119 01:57:07.182948   15977 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:57:07.182963   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.183181   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183284   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183311   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183343   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183385   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.182724   15977 addons.go:239] Setting addon metrics-server=true in "addons-167289"
	I1119 01:57:07.183417   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182633   15977 addons.go:70] Setting ingress=true in profile "addons-167289"
	I1119 01:57:07.183606   15977 addons.go:239] Setting addon ingress=true in "addons-167289"
	I1119 01:57:07.183647   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.183935   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.182650   15977 addons.go:70] Setting registry=true in profile "addons-167289"
	I1119 01:57:07.183979   15977 addons.go:239] Setting addon registry=true in "addons-167289"
	I1119 01:57:07.184006   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.184094   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.184451   15977 out.go:179] * Verifying Kubernetes components...
	I1119 01:57:07.184923   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.182676   15977 addons.go:239] Setting addon ingress-dns=true in "addons-167289"
	I1119 01:57:07.185472   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182680   15977 addons.go:239] Setting addon volumesnapshots=true in "addons-167289"
	I1119 01:57:07.185720   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.185932   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.186200   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183313   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.182728   15977 addons.go:239] Setting addon inspektor-gadget=true in "addons-167289"
	I1119 01:57:07.187558   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182748   15977 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-167289"
	I1119 01:57:07.191791   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182645   15977 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-167289"
	I1119 01:57:07.182645   15977 addons.go:70] Setting registry-creds=true in profile "addons-167289"
	I1119 01:57:07.182696   15977 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-167289"
	I1119 01:57:07.182757   15977 addons.go:70] Setting default-storageclass=true in profile "addons-167289"
	I1119 01:57:07.191879   15977 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-167289"
	I1119 01:57:07.191934   15977 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-167289"
	I1119 01:57:07.191953   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.192443   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.192471   15977 addons.go:239] Setting addon registry-creds=true in "addons-167289"
	I1119 01:57:07.192501   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.192968   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.193293   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.193688   15977 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-167289"
	I1119 01:57:07.194018   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.194104   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.194304   15977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:57:07.197366   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.232175   15977 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 01:57:07.233775   15977 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 01:57:07.233794   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 01:57:07.233853   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
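Every ssh/scp to the node funnels through whatever host port Docker mapped to the container's 22/tcp, and the Go template above digs it out of docker inspect: NetworkSettings.Ports["22/tcp"][0].HostPort (32768 in this run, as the sshutil lines below confirm). The same lookup standalone:

    # Host port Docker mapped to the kic container's SSH port
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-167289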
	I1119 01:57:07.241465   15977 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 01:57:07.252652   15977 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 01:57:07.254964   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 01:57:07.255007   15977 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 01:57:07.255076   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.261562   15977 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:57:07.261644   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 01:57:07.262685   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	W1119 01:57:07.267538   15977 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 01:57:07.270512   15977 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 01:57:07.271544   15977 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 01:57:07.272974   15977 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 01:57:07.273033   15977 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:07.271835   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.274124   15977 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 01:57:07.274144   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 01:57:07.274203   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.275338   15977 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 01:57:07.276517   15977 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:57:07.276535   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 01:57:07.276638   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.276845   15977 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:07.278030   15977 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:57:07.278092   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 01:57:07.278168   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.283628   15977 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 01:57:07.283706   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 01:57:07.284156   15977 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 01:57:07.285284   15977 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:57:07.285306   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 01:57:07.285353   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.286681   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 01:57:07.286700   15977 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:57:07.286701   15977 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 01:57:07.286712   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 01:57:07.286758   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.286782   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.299378   15977 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-167289"
	I1119 01:57:07.299570   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.300867   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.307562   15977 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 01:57:07.308025   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.310876   15977 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 01:57:07.310899   15977 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 01:57:07.310950   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.318483   15977 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 01:57:07.318637   15977 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 01:57:07.320241   15977 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:57:07.320260   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 01:57:07.320318   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.321269   15977 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:57:07.321289   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 01:57:07.321337   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.326578   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.334114   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.335037   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.335136   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 01:57:07.336384   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 01:57:07.337621   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 01:57:07.338864   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 01:57:07.339959   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 01:57:07.340253   15977 addons.go:239] Setting addon default-storageclass=true in "addons-167289"
	I1119 01:57:07.340361   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.341406   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.342123   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1119 01:57:07.345704   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 01:57:07.346996   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 01:57:07.347582   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.348025   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 01:57:07.348049   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 01:57:07.348102   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.355282   15977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
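This one-liner patches CoreDNS in place: sed inserts a hosts block resolving host.minikube.internal to the network gateway 192.168.49.1 (with fallthrough for every other name) ahead of the forward plugin, enables the log plugin, and pipes the edited Corefile back through kubectl replace. A sketch for confirming the block landed in the live ConfigMap:

    # Show the injected hosts stanza in the running Corefile
    kubectl -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'host.minikube.internal'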
	I1119 01:57:07.364095   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.367803   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.372263   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.389094   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.393662   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.407349   15977 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 01:57:07.410139   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.410792   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.411871   15977 out.go:179]   - Using image docker.io/busybox:stable
	I1119 01:57:07.411994   15977 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 01:57:07.412011   15977 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 01:57:07.412058   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.412996   15977 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:57:07.413036   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 01:57:07.413082   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.414011   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	W1119 01:57:07.415689   15977 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1119 01:57:07.415750   15977 retry.go:31] will retry after 330.560053ms: ssh: handshake failed: EOF
	I1119 01:57:07.419615   15977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:57:07.443361   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.453636   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.500323   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 01:57:07.501005   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:57:07.509680   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 01:57:07.509705   15977 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 01:57:07.509885   15977 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 01:57:07.509899   15977 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 01:57:07.521120   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:57:07.525648   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:57:07.526733   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:57:07.530958   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:57:07.538081   15977 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:57:07.538098   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 01:57:07.543467   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 01:57:07.543527   15977 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 01:57:07.552704   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:57:07.570847   15977 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 01:57:07.570873   15977 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 01:57:07.574582   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 01:57:07.574602   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 01:57:07.577366   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:57:07.591333   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 01:57:07.591412   15977 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 01:57:07.594959   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:57:07.603495   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:57:07.603837   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 01:57:07.613237   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 01:57:07.613259   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 01:57:07.619850   15977 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 01:57:07.619874   15977 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 01:57:07.645512   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:57:07.645554   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 01:57:07.656396   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 01:57:07.656424   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 01:57:07.668559   15977 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 01:57:07.668588   15977 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 01:57:07.702657   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:57:07.710399   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 01:57:07.710439   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 01:57:07.719333   15977 node_ready.go:35] waiting up to 6m0s for node "addons-167289" to be "Ready" ...
	I1119 01:57:07.719779   15977 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
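
The "host record injected" step rewrites the Corefile in the kube-system/coredns ConfigMap so cluster pods can resolve host.minikube.internal to the container gateway (192.168.49.1 on this run). The resulting stanza is roughly the following hosts-plugin block (reconstructed for illustration; the actual ConfigMap is not shown in this log):

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
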
	I1119 01:57:07.727571   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 01:57:07.727599   15977 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 01:57:07.754760   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 01:57:07.754808   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 01:57:07.795532   15977 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:57:07.795559   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 01:57:07.812210   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 01:57:07.812234   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 01:57:07.869769   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 01:57:07.869803   15977 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 01:57:07.873364   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:57:07.921856   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 01:57:07.921884   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 01:57:07.985659   15977 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 01:57:07.985684   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 01:57:07.993671   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 01:57:07.993761   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 01:57:08.035890   15977 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 01:57:08.036001   15977 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 01:57:08.049831   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 01:57:08.049856   15977 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 01:57:08.094498   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 01:57:08.095977   15977 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:57:08.096045   15977 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 01:57:08.141880   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:57:08.231633   15977 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-167289" context rescaled to 1 replicas
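
The rescale above pins CoreDNS to a single replica so this one-node cluster is not left running a redundant copy. A minimal client-go sketch of the same operation through the scale subresource (an assumed helper, not minikube's kapi.go):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS pins kube-system/coredns to one replica via the
	// scale subresource, matching the "rescaled to 1" log line above.
	func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
		deps := cs.AppsV1().Deployments("kube-system")
		scale, err := deps.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = deps.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}
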
	I1119 01:57:08.657396   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.156347683s)
	I1119 01:57:08.657455   15977 addons.go:480] Verifying addon ingress=true in "addons-167289"
	I1119 01:57:08.657495   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.13634593s)
	I1119 01:57:08.657572   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.131892919s)
	I1119 01:57:08.657661   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130902418s)
	I1119 01:57:08.657730   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.126734908s)
	I1119 01:57:08.657791   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.105064107s)
	I1119 01:57:08.657830   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.080441497s)
	I1119 01:57:08.657845   15977 addons.go:480] Verifying addon registry=true in "addons-167289"
	I1119 01:57:08.657928   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.062944454s)
	I1119 01:57:08.658010   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.054450482s)
	I1119 01:57:08.658038   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.054121439s)
	I1119 01:57:08.660054   15977 out.go:179] * Verifying registry addon...
	I1119 01:57:08.660056   15977 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-167289 service yakd-dashboard -n yakd-dashboard
	
	I1119 01:57:08.660063   15977 out.go:179] * Verifying ingress addon...
	I1119 01:57:08.662279   15977 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 01:57:08.662722   15977 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 01:57:08.664957   15977 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:57:08.664977   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:08.665071   15977 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 01:57:08.665091   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:08.665385   15977 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
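
The storage-provisioner-rancher warning above is a plain optimistic-concurrency conflict: two writers raced on the local-path StorageClass, and the losing update carried a stale resourceVersion, so the apiserver answered 409. The standard remedy is to re-read the object and retry the write on conflict, for example with client-go's conflict helper (a minimal sketch, not minikube's code):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markDefault re-reads the StorageClass before every write, so a
	// concurrent update costs one retry instead of the error above.
	func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // retried only when the apiserver answers 409 Conflict
		})
	}
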
	I1119 01:57:09.104203   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.230792489s)
	W1119 01:57:09.104258   15977 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 01:57:09.104279   15977 retry.go:31] will retry after 185.166277ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
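
Both failures above are about ordering, not content: the batch applies the VolumeSnapshot CRDs and a VolumeSnapshotClass in a single kubectl invocation, and the client builds its REST mappings before the freshly created CRDs are Established, hence "no matches for kind VolumeSnapshotClass". Splitting the apply and waiting on the CRD's Established condition avoids the race entirely (a sketch using the manifest paths from the log):

	package sketch

	import "os/exec"

	// applySnapshotAddon creates the CRDs first, waits until the
	// apiserver reports them Established, then applies the custom
	// resources that the one-shot apply above could not map.
	func applySnapshotAddon() error {
		steps := [][]string{
			{"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
			{"wait", "--for=condition=Established", "--timeout=60s",
				"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
			{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
		}
		for _, args := range steps {
			if err := exec.Command("kubectl", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}
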
	I1119 01:57:09.104364   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.009816355s)
	I1119 01:57:09.104403   15977 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-167289"
	I1119 01:57:09.104420   15977 addons.go:480] Verifying addon metrics-server=true in "addons-167289"
	I1119 01:57:09.106028   15977 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 01:57:09.107988   15977 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 01:57:09.110371   15977 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:57:09.110390   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
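
The kapi.go lines that dominate the remainder of this log are one poll loop per addon: list pods by label selector roughly every 500ms and report their phase until they leave Pending. A minimal client-go equivalent (assumed shape; kapi.go's internals are not shown in this report):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls pods matching selector in ns until all are
	// Running, on the ~500ms cadence visible in the lines above.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // e.g. Pending, as logged above
					}
				}
				return true, nil
			})
	}
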
	I1119 01:57:09.210416   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:09.210470   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:09.290139   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:57:09.611278   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:09.665403   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:09.665508   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:09.721797   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:10.110359   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:10.165602   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:10.165610   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:10.610748   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:10.664960   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:10.665052   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:11.110734   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:11.164693   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:11.164854   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:11.610879   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:11.665026   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:11.665219   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:11.722295   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:11.728071   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.43789244s)
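
The second attempt succeeds for two reasons visible in the log: the jittered ~185ms backoff gave the CRDs time to become Established, and --force lets kubectl fall back to delete-and-recreate for any object it cannot patch in place. minikube's retry wrapper behaves like client-go's generic backoff helper (a sketch assuming the retry policy; the real retry.go is not shown here):

	package sketch

	import (
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/retry"
	)

	// applyWithBackoff retries fn on any error with exponential,
	// jittered delays, which is how a first failure turns into
	// "will retry after 185.166277ms".
	func applyWithBackoff(fn func() error) error {
		backoff := wait.Backoff{
			Duration: 150 * time.Millisecond, // base delay before the first retry
			Factor:   2,                      // double the delay each attempt
			Jitter:   0.5,                    // randomize, hence the odd 185ms
			Steps:    5,                      // give up after five attempts
		}
		return retry.OnError(backoff, func(error) bool { return true }, fn)
	}
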
	I1119 01:57:12.111092   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:12.164933   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:12.165143   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:12.611351   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:12.664411   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:12.665333   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:13.110919   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:13.164823   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:13.164993   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:13.610846   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:13.664813   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:13.664878   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:14.111267   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:14.165222   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:14.165366   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:14.222196   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:14.611198   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:14.665344   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:14.665347   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:14.887830   15977 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 01:57:14.887895   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:14.906331   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:15.010676   15977 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 01:57:15.022034   15977 addons.go:239] Setting addon gcp-auth=true in "addons-167289"
	I1119 01:57:15.022086   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:15.022408   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:15.038501   15977 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 01:57:15.038549   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:15.054562   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:15.110492   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:15.145546   15977 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:15.146670   15977 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 01:57:15.147677   15977 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 01:57:15.147691   15977 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 01:57:15.159311   15977 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 01:57:15.159328   15977 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 01:57:15.165647   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:15.165710   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:15.171019   15977 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:57:15.171036   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 01:57:15.182458   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:57:15.457426   15977 addons.go:480] Verifying addon gcp-auth=true in "addons-167289"
	I1119 01:57:15.458906   15977 out.go:179] * Verifying gcp-auth addon...
	I1119 01:57:15.460827   15977 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 01:57:15.462859   15977 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 01:57:15.462885   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:15.611161   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:15.665208   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:15.665350   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:15.963587   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:16.111124   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:16.165225   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:16.165384   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:16.222330   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:16.464324   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:16.610867   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:16.664758   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:16.664955   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:16.963644   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:17.111054   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:17.165165   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:17.165349   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:17.463694   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:17.610947   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:17.665055   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:17.665095   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:17.963360   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:18.110706   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:18.164733   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:18.164818   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:18.463108   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:18.610425   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:18.664413   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:18.665385   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:18.722291   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:18.963725   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:19.110985   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:19.165079   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:19.165326   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:19.463461   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:19.610614   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:19.664625   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:19.664781   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:19.964046   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:20.110529   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:20.164574   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:20.164812   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:20.464265   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:20.610736   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:20.664879   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:20.665024   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:20.963180   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:21.110498   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:21.164530   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:21.165460   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:21.222481   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:21.463770   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:21.611049   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:21.665168   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:21.665392   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:21.963612   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:22.110926   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:22.165080   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:22.165262   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:22.463616   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:22.610941   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:22.664912   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:22.665061   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:22.963399   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:23.110685   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:23.164879   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:23.164954   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:23.463217   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:23.610313   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:23.665517   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:23.665662   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:23.721351   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:23.963724   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:24.110976   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:24.165122   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:24.165309   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:24.463740   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:24.610740   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:24.664739   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:24.664914   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:24.963768   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:25.111124   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:25.165112   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:25.165251   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:25.463659   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:25.611091   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:25.665016   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:25.665231   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:25.722202   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:25.963579   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:26.110810   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:26.164881   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:26.165029   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:26.463704   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:26.611322   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:26.665235   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:26.665508   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:26.963690   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:27.111163   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:27.165169   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:27.165235   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:27.463805   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:27.610990   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:27.665068   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:27.665194   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:27.963518   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:28.111047   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:28.165262   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:28.165450   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:28.222566   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:28.463931   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:28.611316   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:28.664557   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:28.665420   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:28.963830   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:29.111142   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:29.165265   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:29.165491   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:29.464117   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:29.610233   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:29.665350   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:29.665364   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:29.963752   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:30.110900   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:30.165107   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:30.165160   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:30.463817   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:30.610943   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:30.665015   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:30.665179   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:30.722132   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:30.963485   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:31.110672   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:31.164676   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:31.164896   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:31.463212   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:31.610250   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:31.665450   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:31.665608   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:31.963675   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:32.111031   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:32.165292   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:32.165516   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:32.463709   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:32.611028   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:32.665061   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:32.665221   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:32.963493   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:33.110769   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:33.164908   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:33.165070   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:33.221967   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:33.463228   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:33.610336   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:33.664413   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:33.665316   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:33.963737   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:34.111114   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:34.165498   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:34.165614   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:34.463965   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:34.611197   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:34.665253   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:34.665445   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:34.963678   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:35.110689   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:35.164678   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:35.164878   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:35.464014   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:35.610171   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:35.665385   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:35.665550   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:35.721231   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:35.963484   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:36.110698   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:36.164532   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:36.164639   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:36.463752   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:36.610887   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:36.664803   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:36.664997   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:36.963018   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:37.110283   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:37.165310   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:37.165455   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:37.463959   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:37.611063   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:37.665051   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:37.665156   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:37.721910   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:37.963104   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:38.110475   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:38.164653   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:38.165388   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:38.464073   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:38.610261   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:38.665258   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:38.665311   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:38.963516   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:39.110739   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:39.164756   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:39.164931   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:39.463094   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:39.610317   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:39.665210   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:39.665367   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:39.722147   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:39.963626   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:40.110747   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:40.164890   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:40.165061   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:40.463406   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:40.610609   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:40.664582   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:40.665561   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:40.963365   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:41.110800   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:41.164994   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:41.165106   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:41.463491   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:41.610630   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:41.665043   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:41.665218   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:41.722254   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:41.963656   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:42.110908   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:42.165029   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:42.165162   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:42.463778   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:42.610823   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:42.664757   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:42.664991   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:42.963136   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:43.110265   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:43.165221   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:43.165371   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:43.463702   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:43.610741   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:43.665025   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:43.665035   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:43.963237   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:44.110762   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:44.164899   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:44.164994   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:44.222019   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:44.463627   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:44.610908   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:44.664888   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:44.664957   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:44.963227   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:45.110342   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:45.165483   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:45.165543   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:45.464086   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:45.611173   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:45.665147   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:45.665279   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:45.963732   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:46.111054   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:46.165101   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:46.165279   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:46.222315   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:46.463828   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:46.611142   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:46.665347   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:46.665485   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:46.963534   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:47.110867   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:47.164738   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:47.164923   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:47.463288   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:47.610267   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:47.665198   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:47.665345   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:47.963946   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:48.111217   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:48.165306   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:48.165399   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:48.222361   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:48.463720   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:48.610840   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:48.664834   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:48.664957   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:48.964543   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:49.110795   15977 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:57:49.110815   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:49.168730   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:49.168973   15977 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:57:49.168988   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:49.222737   15977 node_ready.go:49] node "addons-167289" is "Ready"
	I1119 01:57:49.222773   15977 node_ready.go:38] duration metric: took 41.503395794s for node "addons-167289" to be "Ready" ...
	I1119 01:57:49.222790   15977 api_server.go:52] waiting for apiserver process to appear ...
	I1119 01:57:49.222844   15977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 01:57:49.238948   15977 api_server.go:72] duration metric: took 42.056477802s to wait for apiserver process to appear ...
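
The process wait above shells into the node and greps for a kube-apiserver process. A minimal standalone sketch of that check, where only the pgrep command string comes from the log and the helper name, polling interval, and timeout are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until it exits 0 (a match exists)
	// or the deadline passes.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// mirrors the logged command: sudo pgrep -xnf kube-apiserver.*minikube.*
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServerProcess(2 * time.Minute))
	}
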
	I1119 01:57:49.239024   15977 api_server.go:88] waiting for apiserver healthz status ...
	I1119 01:57:49.239047   15977 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1119 01:57:49.244392   15977 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1119 01:57:49.245489   15977 api_server.go:141] control plane version: v1.34.1
	I1119 01:57:49.245519   15977 api_server.go:131] duration metric: took 6.485232ms to wait for apiserver health ...
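
The healthz wait that follows polls the endpoint until it answers 200 with the body "ok". A compact sketch; InsecureSkipVerify is an assumption made here so the snippet runs against the self-signed apiserver certificate, whereas a careful client would verify it:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // matches "returned 200: ok" in the log
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute))
	}
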
	I1119 01:57:49.245529   15977 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 01:57:49.269492   15977 system_pods.go:59] 20 kube-system pods found
	I1119 01:57:49.269534   15977 system_pods.go:61] "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:49.269555   15977 system_pods.go:61] "coredns-66bc5c9577-xb5hd" [83885124-6b05-4e64-8764-56f1ebccbc5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:49.269567   15977 system_pods.go:61] "csi-hostpath-attacher-0" [86eb0765-79ab-4572-b5a8-a93869132f95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:57:49.269575   15977 system_pods.go:61] "csi-hostpath-resizer-0" [0ce8e6a4-9cb5-48fd-b476-4e5e2f1d1fef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:57:49.269588   15977 system_pods.go:61] "csi-hostpathplugin-m4svl" [eec068d1-82e3-48ab-9138-f9a4fb6e0ec0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:57:49.269595   15977 system_pods.go:61] "etcd-addons-167289" [2416d7e9-627f-4e54-a742-4e6fa6d16027] Running
	I1119 01:57:49.269602   15977 system_pods.go:61] "kindnet-cf2ws" [dece1234-512d-45b8-84de-da6d63aca86d] Running
	I1119 01:57:49.269616   15977 system_pods.go:61] "kube-apiserver-addons-167289" [16a98896-3b33-4612-9f81-d401f375bc30] Running
	I1119 01:57:49.269623   15977 system_pods.go:61] "kube-controller-manager-addons-167289" [441bdd3c-a38f-4cbc-b00e-f2e40476ac8d] Running
	I1119 01:57:49.269631   15977 system_pods.go:61] "kube-ingress-dns-minikube" [7dfa0119-3fc1-40f7-832e-88c59236450d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:57:49.269636   15977 system_pods.go:61] "kube-proxy-lrvxh" [1614ffa7-27d9-4b0a-a7c5-3273b69aa8f9] Running
	I1119 01:57:49.269641   15977 system_pods.go:61] "kube-scheduler-addons-167289" [9abbf69d-beb6-4a30-aaeb-a85cca56ad6c] Running
	I1119 01:57:49.269650   15977 system_pods.go:61] "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:57:49.269661   15977 system_pods.go:61] "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:49.269672   15977 system_pods.go:61] "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:57:49.269679   15977 system_pods.go:61] "registry-creds-764b6fb674-85l2k" [4dc568dd-9122-493f-a53a-1829913774ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:49.269686   15977 system_pods.go:61] "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:57:49.269693   15977 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q5bjz" [a1198c36-b279-486e-98ad-1cd9940f4663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.269708   15977 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qfskz" [5212ea77-460c-423c-a29e-832bbc94b1d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.269715   15977 system_pods.go:61] "storage-provisioner" [a7872dc1-9361-4083-b435-f50ea958395a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:57:49.269722   15977 system_pods.go:74] duration metric: took 24.185941ms to wait for pod list to return data ...
	I1119 01:57:49.269732   15977 default_sa.go:34] waiting for default service account to be created ...
	I1119 01:57:49.274170   15977 default_sa.go:45] found service account: "default"
	I1119 01:57:49.274195   15977 default_sa.go:55] duration metric: took 4.456926ms for default service account to be created ...
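
The default-service-account wait can be reproduced with client-go. Only the namespace/name pair ("default"/"default") comes from the log; the kubeconfig discovery and the one-second poll are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the "default" ServiceAccount exists in the "default" namespace.
		for i := 0; i < 60; i++ {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
				fmt.Println(`found service account: "default"`)
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("default service account never appeared")
	}
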
	I1119 01:57:49.274205   15977 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 01:57:49.373456   15977 system_pods.go:86] 20 kube-system pods found
	I1119 01:57:49.373501   15977 system_pods.go:89] "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:49.373514   15977 system_pods.go:89] "coredns-66bc5c9577-xb5hd" [83885124-6b05-4e64-8764-56f1ebccbc5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:49.373524   15977 system_pods.go:89] "csi-hostpath-attacher-0" [86eb0765-79ab-4572-b5a8-a93869132f95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:57:49.373533   15977 system_pods.go:89] "csi-hostpath-resizer-0" [0ce8e6a4-9cb5-48fd-b476-4e5e2f1d1fef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:57:49.373550   15977 system_pods.go:89] "csi-hostpathplugin-m4svl" [eec068d1-82e3-48ab-9138-f9a4fb6e0ec0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:57:49.373560   15977 system_pods.go:89] "etcd-addons-167289" [2416d7e9-627f-4e54-a742-4e6fa6d16027] Running
	I1119 01:57:49.373569   15977 system_pods.go:89] "kindnet-cf2ws" [dece1234-512d-45b8-84de-da6d63aca86d] Running
	I1119 01:57:49.373576   15977 system_pods.go:89] "kube-apiserver-addons-167289" [16a98896-3b33-4612-9f81-d401f375bc30] Running
	I1119 01:57:49.373582   15977 system_pods.go:89] "kube-controller-manager-addons-167289" [441bdd3c-a38f-4cbc-b00e-f2e40476ac8d] Running
	I1119 01:57:49.373590   15977 system_pods.go:89] "kube-ingress-dns-minikube" [7dfa0119-3fc1-40f7-832e-88c59236450d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:57:49.373596   15977 system_pods.go:89] "kube-proxy-lrvxh" [1614ffa7-27d9-4b0a-a7c5-3273b69aa8f9] Running
	I1119 01:57:49.373603   15977 system_pods.go:89] "kube-scheduler-addons-167289" [9abbf69d-beb6-4a30-aaeb-a85cca56ad6c] Running
	I1119 01:57:49.373620   15977 system_pods.go:89] "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:57:49.373632   15977 system_pods.go:89] "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:49.373641   15977 system_pods.go:89] "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:57:49.373649   15977 system_pods.go:89] "registry-creds-764b6fb674-85l2k" [4dc568dd-9122-493f-a53a-1829913774ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:49.373695   15977 system_pods.go:89] "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:57:49.373709   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q5bjz" [a1198c36-b279-486e-98ad-1cd9940f4663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.373719   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfskz" [5212ea77-460c-423c-a29e-832bbc94b1d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.373729   15977 system_pods.go:89] "storage-provisioner" [a7872dc1-9361-4083-b435-f50ea958395a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:57:49.373751   15977 retry.go:31] will retry after 262.937655ms: missing components: kube-dns
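
The retry.go lines above come from a poll-with-randomized-delay helper: check, report what is still missing, sleep a fraction of a second, check again. A stand-in version (helper name and delay bounds are illustrative, not minikube's):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs check with a randomized delay until it succeeds
	// or the overall timeout is exceeded.
	func retryUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return err
			}
			d := time.Duration(200+rand.Intn(300)) * time.Millisecond
			fmt.Printf("will retry after %s: %v\n", d, err) // same shape as the log line
			time.Sleep(d)
		}
	}

	func main() {
		attempts := 0
		fmt.Println(retryUntil(5*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		}))
	}
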
	I1119 01:57:49.467536   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:49.610924   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:49.641140   15977 system_pods.go:86] 20 kube-system pods found
	I1119 01:57:49.641182   15977 system_pods.go:89] "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:49.641193   15977 system_pods.go:89] "coredns-66bc5c9577-xb5hd" [83885124-6b05-4e64-8764-56f1ebccbc5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:49.641203   15977 system_pods.go:89] "csi-hostpath-attacher-0" [86eb0765-79ab-4572-b5a8-a93869132f95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:57:49.641211   15977 system_pods.go:89] "csi-hostpath-resizer-0" [0ce8e6a4-9cb5-48fd-b476-4e5e2f1d1fef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:57:49.641219   15977 system_pods.go:89] "csi-hostpathplugin-m4svl" [eec068d1-82e3-48ab-9138-f9a4fb6e0ec0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:57:49.641231   15977 system_pods.go:89] "etcd-addons-167289" [2416d7e9-627f-4e54-a742-4e6fa6d16027] Running
	I1119 01:57:49.641238   15977 system_pods.go:89] "kindnet-cf2ws" [dece1234-512d-45b8-84de-da6d63aca86d] Running
	I1119 01:57:49.641243   15977 system_pods.go:89] "kube-apiserver-addons-167289" [16a98896-3b33-4612-9f81-d401f375bc30] Running
	I1119 01:57:49.641248   15977 system_pods.go:89] "kube-controller-manager-addons-167289" [441bdd3c-a38f-4cbc-b00e-f2e40476ac8d] Running
	I1119 01:57:49.641258   15977 system_pods.go:89] "kube-ingress-dns-minikube" [7dfa0119-3fc1-40f7-832e-88c59236450d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:57:49.641271   15977 system_pods.go:89] "kube-proxy-lrvxh" [1614ffa7-27d9-4b0a-a7c5-3273b69aa8f9] Running
	I1119 01:57:49.641277   15977 system_pods.go:89] "kube-scheduler-addons-167289" [9abbf69d-beb6-4a30-aaeb-a85cca56ad6c] Running
	I1119 01:57:49.641284   15977 system_pods.go:89] "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:57:49.641292   15977 system_pods.go:89] "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:49.641301   15977 system_pods.go:89] "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:57:49.641309   15977 system_pods.go:89] "registry-creds-764b6fb674-85l2k" [4dc568dd-9122-493f-a53a-1829913774ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:49.641318   15977 system_pods.go:89] "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:57:49.641327   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q5bjz" [a1198c36-b279-486e-98ad-1cd9940f4663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.641339   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfskz" [5212ea77-460c-423c-a29e-832bbc94b1d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.641346   15977 system_pods.go:89] "storage-provisioner" [a7872dc1-9361-4083-b435-f50ea958395a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:57:49.641369   15977 retry.go:31] will retry after 249.569512ms: missing components: kube-dns
	I1119 01:57:49.665370   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:49.665456   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:49.895399   15977 system_pods.go:86] 20 kube-system pods found
	I1119 01:57:49.895451   15977 system_pods.go:89] "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:49.895458   15977 system_pods.go:89] "coredns-66bc5c9577-xb5hd" [83885124-6b05-4e64-8764-56f1ebccbc5b] Running
	I1119 01:57:49.895465   15977 system_pods.go:89] "csi-hostpath-attacher-0" [86eb0765-79ab-4572-b5a8-a93869132f95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:57:49.895471   15977 system_pods.go:89] "csi-hostpath-resizer-0" [0ce8e6a4-9cb5-48fd-b476-4e5e2f1d1fef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:57:49.895476   15977 system_pods.go:89] "csi-hostpathplugin-m4svl" [eec068d1-82e3-48ab-9138-f9a4fb6e0ec0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:57:49.895482   15977 system_pods.go:89] "etcd-addons-167289" [2416d7e9-627f-4e54-a742-4e6fa6d16027] Running
	I1119 01:57:49.895486   15977 system_pods.go:89] "kindnet-cf2ws" [dece1234-512d-45b8-84de-da6d63aca86d] Running
	I1119 01:57:49.895489   15977 system_pods.go:89] "kube-apiserver-addons-167289" [16a98896-3b33-4612-9f81-d401f375bc30] Running
	I1119 01:57:49.895495   15977 system_pods.go:89] "kube-controller-manager-addons-167289" [441bdd3c-a38f-4cbc-b00e-f2e40476ac8d] Running
	I1119 01:57:49.895502   15977 system_pods.go:89] "kube-ingress-dns-minikube" [7dfa0119-3fc1-40f7-832e-88c59236450d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:57:49.895505   15977 system_pods.go:89] "kube-proxy-lrvxh" [1614ffa7-27d9-4b0a-a7c5-3273b69aa8f9] Running
	I1119 01:57:49.895509   15977 system_pods.go:89] "kube-scheduler-addons-167289" [9abbf69d-beb6-4a30-aaeb-a85cca56ad6c] Running
	I1119 01:57:49.895515   15977 system_pods.go:89] "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:57:49.895523   15977 system_pods.go:89] "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:49.895539   15977 system_pods.go:89] "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:57:49.895546   15977 system_pods.go:89] "registry-creds-764b6fb674-85l2k" [4dc568dd-9122-493f-a53a-1829913774ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:49.895552   15977 system_pods.go:89] "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:57:49.895558   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q5bjz" [a1198c36-b279-486e-98ad-1cd9940f4663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.895567   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfskz" [5212ea77-460c-423c-a29e-832bbc94b1d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.895570   15977 system_pods.go:89] "storage-provisioner" [a7872dc1-9361-4083-b435-f50ea958395a] Running
	I1119 01:57:49.895580   15977 system_pods.go:126] duration metric: took 621.368616ms to wait for k8s-apps to be running ...
	I1119 01:57:49.895587   15977 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 01:57:49.895628   15977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 01:57:49.908081   15977 system_svc.go:56] duration metric: took 12.487096ms WaitForService to wait for kubelet
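
The kubelet check leans entirely on systemctl's exit code: --quiet suppresses output, so a zero exit means the unit is active. A sketch mirroring the logged command:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning reports whether the kubelet systemd unit is active,
	// as in the logged "sudo systemctl is-active --quiet service kubelet".
	func kubeletRunning() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", kubeletRunning())
	}
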
	I1119 01:57:49.908110   15977 kubeadm.go:587] duration metric: took 42.725642841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:57:49.908131   15977 node_conditions.go:102] verifying NodePressure condition ...
	I1119 01:57:49.910455   15977 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 01:57:49.910482   15977 node_conditions.go:123] node cpu capacity is 8
	I1119 01:57:49.910499   15977 node_conditions.go:105] duration metric: took 2.362295ms to run NodePressure ...
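
The NodePressure step reads each node's reported capacity, which is where the 304681132Ki ephemeral-storage and 8-cpu figures above come from. A client-go sketch (the kubeconfig discovery is an assumption):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
	}
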
	I1119 01:57:49.910512   15977 start.go:242] waiting for startup goroutines ...
	I1119 01:57:49.963595   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:50.111951   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:50.166020   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:50.166088   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:50.463702   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:50.611593   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:50.712078   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:50.712734   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:50.964206   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:51.111981   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:51.165930   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:51.166097   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:51.464624   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:51.611756   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:51.665405   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:51.665464   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:51.964230   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:52.111212   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:52.165838   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:52.165956   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:52.463648   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:52.611908   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:52.665408   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:52.665489   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:52.964724   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:53.111562   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:53.164740   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:53.165691   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:53.464682   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:53.611777   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:53.665315   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:53.665448   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:53.964453   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:54.111655   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:54.165615   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:54.165708   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:54.463774   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:54.611312   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:54.666137   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:54.666138   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:54.964322   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:55.112766   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:55.170109   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:55.170230   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:55.467613   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:55.611729   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:55.665898   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:55.666365   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:55.964290   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:56.110977   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:56.165497   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:56.165548   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:56.464482   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:56.611594   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:56.664834   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:56.665873   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:56.963588   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:57.111084   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:57.165663   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:57.165678   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:57.463297   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:57.610592   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:57.664422   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:57.665501   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:57.964250   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:58.111019   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:58.165606   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:58.165645   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:58.464339   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:58.611288   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:58.665681   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:58.665728   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:58.964252   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:59.110663   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:59.165140   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:59.165264   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:59.464506   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:59.611494   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:59.665072   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:59.665760   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:59.964686   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:00.111387   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:00.165551   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:00.165593   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:00.464576   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:00.611598   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:00.665514   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:00.665848   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:00.963235   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:01.111011   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:01.165042   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:01.165087   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:01.463701   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:01.611591   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:01.665140   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:01.666907   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:01.963981   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:02.112028   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:02.165557   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:02.165625   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:02.464194   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:02.629052   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:02.743354   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:02.743695   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:02.963890   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:03.112163   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:03.165860   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:03.165877   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:03.463720   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:03.611768   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:03.665221   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:03.665595   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:03.963566   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:04.112087   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:04.165820   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:04.165927   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:04.463659   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:04.611603   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:04.711537   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:04.711738   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:04.965008   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:05.112580   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:05.165242   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:05.165331   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:05.474737   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:05.611526   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:05.664619   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:05.665450   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:05.964000   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:06.111050   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:06.165497   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:06.165567   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:06.464321   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:06.610972   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:06.710884   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:06.711074   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:06.964246   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:07.111460   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:07.166032   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:07.166101   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:07.463384   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:07.610983   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:07.664710   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:07.664833   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:07.963369   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:08.110942   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:08.165567   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:08.165608   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:08.463325   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:08.610856   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:08.664703   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:08.664791   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:08.963392   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:09.110909   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:09.164884   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:09.165073   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:09.463632   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:09.611746   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:09.665294   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:09.665356   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:09.965116   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:10.111324   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:10.165657   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:10.165697   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:10.463975   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:10.610983   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:10.665715   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:10.665712   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:10.964107   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:11.111976   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:11.165476   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:11.165498   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:11.463270   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:11.610992   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:11.665147   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:11.665219   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:11.963404   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:12.111184   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:12.165195   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:12.165238   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:12.463928   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:12.612111   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:12.665682   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:12.665682   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:12.964368   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:13.111027   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:13.165625   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:13.165774   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:13.464655   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:13.611923   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:13.665005   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:13.665122   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:13.963617   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:14.111406   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:14.166173   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:14.166233   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:14.463939   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:14.611841   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:14.664849   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:14.664967   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:14.965506   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:15.112555   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:15.166966   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:15.167019   15977 kapi.go:107] duration metric: took 1m6.504737546s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 01:58:15.463702   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:15.670477   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:15.670550   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:16.023501   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:16.111088   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:16.165712   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:16.463912   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:16.611944   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:16.665540   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:16.967112   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:17.111537   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:17.166223   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:17.487293   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:17.652009   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:17.665117   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:17.963679   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:18.111449   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:18.211922   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:18.463733   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:18.611145   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:18.665882   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:18.963225   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:19.110947   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:19.165390   15977 kapi.go:107] duration metric: took 1m10.502663642s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 01:58:19.465330   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:19.611237   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:19.964698   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:20.111780   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:20.464075   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:20.663214   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:20.963974   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:21.111960   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:21.463665   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:21.612645   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:21.964016   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:22.110522   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:22.464327   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:22.611146   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:22.964400   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:23.111314   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:23.464333   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:23.611269   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:23.964354   15977 kapi.go:107] duration metric: took 1m8.503523861s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 01:58:23.966110   15977 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-167289 cluster.
	I1119 01:58:23.967865   15977 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 01:58:23.969091   15977 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 01:58:24.111774   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:24.610906   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:25.111503   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:25.610686   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:26.111825   15977 kapi.go:107] duration metric: took 1m17.003833892s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 01:58:26.113638   15977 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, yakd, default-storageclass, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1119 01:58:26.114696   15977 addons.go:515] duration metric: took 1m18.932211525s for enable addons: enabled=[cloud-spanner ingress-dns inspektor-gadget storage-provisioner nvidia-device-plugin registry-creds amd-gpu-device-plugin yakd default-storageclass metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1119 01:58:26.114727   15977 start.go:247] waiting for cluster config update ...
	I1119 01:58:26.114744   15977 start.go:256] writing updated cluster config ...
	I1119 01:58:26.115011   15977 ssh_runner.go:195] Run: rm -f paused
	I1119 01:58:26.118723   15977 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 01:58:26.121356   15977 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xb5hd" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.124800   15977 pod_ready.go:94] pod "coredns-66bc5c9577-xb5hd" is "Ready"
	I1119 01:58:26.124821   15977 pod_ready.go:86] duration metric: took 3.443331ms for pod "coredns-66bc5c9577-xb5hd" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.126273   15977 pod_ready.go:83] waiting for pod "etcd-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.129356   15977 pod_ready.go:94] pod "etcd-addons-167289" is "Ready"
	I1119 01:58:26.129372   15977 pod_ready.go:86] duration metric: took 3.082682ms for pod "etcd-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.130847   15977 pod_ready.go:83] waiting for pod "kube-apiserver-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.133885   15977 pod_ready.go:94] pod "kube-apiserver-addons-167289" is "Ready"
	I1119 01:58:26.133903   15977 pod_ready.go:86] duration metric: took 3.040935ms for pod "kube-apiserver-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.135403   15977 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.521511   15977 pod_ready.go:94] pod "kube-controller-manager-addons-167289" is "Ready"
	I1119 01:58:26.521542   15977 pod_ready.go:86] duration metric: took 386.123241ms for pod "kube-controller-manager-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.721859   15977 pod_ready.go:83] waiting for pod "kube-proxy-lrvxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:27.121901   15977 pod_ready.go:94] pod "kube-proxy-lrvxh" is "Ready"
	I1119 01:58:27.121928   15977 pod_ready.go:86] duration metric: took 400.043351ms for pod "kube-proxy-lrvxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:27.322885   15977 pod_ready.go:83] waiting for pod "kube-scheduler-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:27.721338   15977 pod_ready.go:94] pod "kube-scheduler-addons-167289" is "Ready"
	I1119 01:58:27.721363   15977 pod_ready.go:86] duration metric: took 398.453182ms for pod "kube-scheduler-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:27.721374   15977 pod_ready.go:40] duration metric: took 1.602627462s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 01:58:27.762809   15977 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 01:58:27.765472   15977 out.go:179] * Done! kubectl is now configured to use "addons-167289" cluster and "default" namespace by default
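The kapi.go:96 lines above are minikube's addon wait loop: each tick lists pods matching a label selector, logs the phase while any pod is still Pending, and records a kapi.go:107 duration metric once the selector is satisfied; pod_ready.go applies the same pattern to the core kube-system components. Below is a minimal sketch of that polling pattern with client-go; the clientset wiring, the 500ms tick, and the Running check are illustrative assumptions, not minikube's exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsByLabel polls pods matching selector until all are Running,
// logging the current state each tick, like the kapi.go:96 lines above.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	deadline := time.After(timeout)
	tick := time.NewTicker(500 * time.Millisecond) // assumed interval
	defer tick.Stop()
	for {
		select {
		case <-deadline:
			return fmt.Errorf("timed out waiting for %q", selector)
		case <-tick.C:
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0 // no matching pods yet means keep waiting
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
	}
}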
	
	
	==> CRI-O <==
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.506868779Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-hxtm2/POD" id=7e53a58a-4a31-46ff-b8a9-cf26f6eb8dfa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.506939317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.512657815Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hxtm2 Namespace:default ID:ee47d24c6075287730d8359d401fb1181edcc03057ff8051010085948e9cf7a9 UID:f365e3f0-ebc6-497d-81ff-b1a6a66c68c3 NetNS:/var/run/netns/116a9761-1414-4966-a7b6-28f2336a4208 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005209c0}] Aliases:map[]}"
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.512686186Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-hxtm2 to CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.522843084Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hxtm2 Namespace:default ID:ee47d24c6075287730d8359d401fb1181edcc03057ff8051010085948e9cf7a9 UID:f365e3f0-ebc6-497d-81ff-b1a6a66c68c3 NetNS:/var/run/netns/116a9761-1414-4966-a7b6-28f2336a4208 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005209c0}] Aliases:map[]}"
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.522962114Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-hxtm2 for CNI network kindnet (type=ptp)"
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.523885292Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.525036344Z" level=info msg="Ran pod sandbox ee47d24c6075287730d8359d401fb1181edcc03057ff8051010085948e9cf7a9 with infra container: default/hello-world-app-5d498dc89-hxtm2/POD" id=7e53a58a-4a31-46ff-b8a9-cf26f6eb8dfa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.526237598Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=10633782-44b3-42a9-999f-a5be7fde6acf name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.526366794Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=10633782-44b3-42a9-999f-a5be7fde6acf name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.526413722Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=10633782-44b3-42a9-999f-a5be7fde6acf name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.52704824Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=f31f2baf-3c7f-48c9-b346-567077fd7784 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:01:08 addons-167289 crio[778]: time="2025-11-19T02:01:08.531864178Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.403382491Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=f31f2baf-3c7f-48c9-b346-567077fd7784 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.403853694Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=75310d5d-04d5-4ade-8b4c-0f252f28167b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.407428054Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=75e03a82-4783-4833-a272-62949772ab44 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.410958054Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-hxtm2/hello-world-app" id=b419c2e2-adcf-45c7-aac1-4e4891c5e41e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.411053982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.415966572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.416171852Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0edea0f368a277825ae2b3c3207dce73f817dbc059365bc7eb9ef19f8f5dce13/merged/etc/passwd: no such file or directory"
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.416202813Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0edea0f368a277825ae2b3c3207dce73f817dbc059365bc7eb9ef19f8f5dce13/merged/etc/group: no such file or directory"
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.416487518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.458043944Z" level=info msg="Created container 64e810260aee33ce689e4b1ac743e73d187d791f40c1904c360cac592da3a234: default/hello-world-app-5d498dc89-hxtm2/hello-world-app" id=b419c2e2-adcf-45c7-aac1-4e4891c5e41e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.458617071Z" level=info msg="Starting container: 64e810260aee33ce689e4b1ac743e73d187d791f40c1904c360cac592da3a234" id=9536ed89-8e3a-4045-81eb-89b24cc34c00 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:01:09 addons-167289 crio[778]: time="2025-11-19T02:01:09.460792825Z" level=info msg="Started container" PID=9777 containerID=64e810260aee33ce689e4b1ac743e73d187d791f40c1904c360cac592da3a234 description=default/hello-world-app-5d498dc89-hxtm2/hello-world-app id=9536ed89-8e3a-4045-81eb-89b24cc34c00 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee47d24c6075287730d8359d401fb1181edcc03057ff8051010085948e9cf7a9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	64e810260aee3       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   ee47d24c60752       hello-world-app-5d498dc89-hxtm2            default
	6bb674f78899c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   3084fb409ce3d       registry-creds-764b6fb674-85l2k            kube-system
	ee8d609830285       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   078aef04ff3ef       nginx                                      default
	f400e2b2f3408       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   d47ff407d14ae       busybox                                    default
	2b3c875b37c34       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	0057cb6b6d59c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	387526d34b521       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	559ecf78b7f29       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   b97d86d0fe88e       gcp-auth-78565c9fb4-6lrls                  gcp-auth
	f44d066a2880c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	a1f91b84fd835       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   a94a5c3a36bac       gadget-qm258                               gadget
	d46baa577b02a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	1842b647980c9       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   9683973b7fa45       ingress-nginx-controller-6c8bf45fb-89mcj   ingress-nginx
	3e7307111a0a7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   3fd0bb1826b44       registry-proxy-7s98h                       kube-system
	e4525045db437       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   9486566ca191c       amd-gpu-device-plugin-cmmr7                kube-system
	320316320c36a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   9c7d9458f082c       snapshot-controller-7d9fbc56b8-qfskz       kube-system
	63f0ac7c6e1d6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   2 minutes ago            Exited              patch                                    0                   c6461a47f85ea       ingress-nginx-admission-patch-mq2v2        ingress-nginx
	77230f6072332       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   4fcb6eb4e7c9b       nvidia-device-plugin-daemonset-sb8hx       kube-system
	08a4a837401d8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   155a009cec772       local-path-provisioner-648f6765c9-sjqfv    local-path-storage
	4c4521da22d2e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   d811783c895d2       snapshot-controller-7d9fbc56b8-q5bjz       kube-system
	6c5d7a569a83a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	c45598982d3b3       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   3b1ba8a6f3db0       csi-hostpath-resizer-0                     kube-system
	fc07b5bfc1438       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   c9152a04f0679       csi-hostpath-attacher-0                    kube-system
	d1cb956fc89aa       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   c964bc05a1b7c       yakd-dashboard-5ff678cb9-lfwjh             yakd-dashboard
	f8b7afaf360ea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              create                                   0                   02971c980d802       ingress-nginx-admission-create-7868s       ingress-nginx
	ee1592f353982       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   f3cbb42e42ced       metrics-server-85b7d694d7-j62rx            kube-system
	139e05f21703a       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   17556ddd5ea3f       registry-6b586f9694-fvk8h                  kube-system
	ffeca55fd50eb       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago            Running             cloud-spanner-emulator                   0                   d66fe0f14beb7       cloud-spanner-emulator-6f9fcf858b-2s48m    default
	28a0d1d0eb9de       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   fe5bbac44845e       kube-ingress-dns-minikube                  kube-system
	2d72765f224ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   e4aa1d9caef11       storage-provisioner                        kube-system
	4f2a8fdefa3a9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   de48e2240e120       coredns-66bc5c9577-xb5hd                   kube-system
	fc26589821a5b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   d725d176769a0       kindnet-cf2ws                              kube-system
	76265018a97b0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   59ec2f47a03ed       kube-proxy-lrvxh                           kube-system
	2c19d6084be53       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   a8129147aa62b       kube-apiserver-addons-167289               kube-system
	caf07801af8b3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   833dcc9e71021       kube-controller-manager-addons-167289      kube-system
	32f9d499f63a2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   43bd61d555e0f       kube-scheduler-addons-167289               kube-system
	c9553756abec4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   1476ade767365       etcd-addons-167289                         kube-system
	
	
	==> coredns [4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc] <==
	[INFO] 10.244.0.22:43527 - 49723 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00507724s
	[INFO] 10.244.0.22:44622 - 5488 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005057779s
	[INFO] 10.244.0.22:55446 - 28720 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005287485s
	[INFO] 10.244.0.22:38301 - 1892 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005165863s
	[INFO] 10.244.0.22:51183 - 5397 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007314785s
	[INFO] 10.244.0.22:58154 - 54417 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000755726s
	[INFO] 10.244.0.22:60816 - 43483 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00112143s
	[INFO] 10.244.0.25:47235 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000235788s
	[INFO] 10.244.0.25:60781 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162436s
	[INFO] 10.244.0.26:56354 - 62650 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000237147s
	[INFO] 10.244.0.26:34648 - 23780 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000314967s
	[INFO] 10.244.0.26:45610 - 59385 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000121851s
	[INFO] 10.244.0.26:48074 - 47957 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000112964s
	[INFO] 10.244.0.26:43302 - 13391 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000133961s
	[INFO] 10.244.0.26:53247 - 45107 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000180594s
	[INFO] 10.244.0.26:57519 - 32080 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.002872864s
	[INFO] 10.244.0.26:60491 - 34557 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.00595025s
	[INFO] 10.244.0.26:54658 - 41418 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004287048s
	[INFO] 10.244.0.26:40055 - 4650 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005044573s
	[INFO] 10.244.0.26:60523 - 17947 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004068467s
	[INFO] 10.244.0.26:54758 - 17761 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005016128s
	[INFO] 10.244.0.26:60255 - 18038 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004479818s
	[INFO] 10.244.0.26:43592 - 8485 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004730924s
	[INFO] 10.244.0.26:55236 - 2891 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001593866s
	[INFO] 10.244.0.26:44416 - 48029 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001653001s
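The NXDOMAIN bursts above are ordinary resolver search-list expansion: with the cluster default ndots:5, a two-dot name such as accounts.google.com is tried against each search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the host's local, us-central1-a.c.k8s-minikube.internal, c.k8s-minikube.internal, and google.internal suffixes) before the bare name finally answers NOERROR. A small sketch of that expansion rule, with the search list hard-coded to match the queries above:

package main

import (
	"fmt"
	"strings"
)

// expand mirrors resolver search-list behavior: a name with fewer dots
// than ndots is tried against every search domain before being tried
// as-is (the fallback ordering for names at or above ndots is omitted).
func expand(name string, search []string, ndots int) []string {
	if strings.Count(name, ".") >= ndots {
		return []string{name}
	}
	out := make([]string, 0, len(search)+1)
	for _, domain := range search {
		out = append(out, name+"."+domain)
	}
	return append(out, name)
}

func main() {
	search := []string{
		"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local",
		"local", "us-central1-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal", "google.internal",
	}
	for _, q := range expand("accounts.google.com", search, 5) {
		fmt.Println(q) // the first seven get NXDOMAIN above; the bare name resolves
	}
}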
	
	
	==> describe nodes <==
	Name:               addons-167289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-167289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=addons-167289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T01_57_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-167289
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-167289"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 01:56:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-167289
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:01:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:00:35 +0000   Wed, 19 Nov 2025 01:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:00:35 +0000   Wed, 19 Nov 2025 01:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:00:35 +0000   Wed, 19 Nov 2025 01:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:00:35 +0000   Wed, 19 Nov 2025 01:57:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-167289
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                78702d16-7ec5-4b22-9678-f0ef333e8730
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  default                     cloud-spanner-emulator-6f9fcf858b-2s48m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  default                     hello-world-app-5d498dc89-hxtm2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-qm258                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  gcp-auth                    gcp-auth-78565c9fb4-6lrls                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-89mcj    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m1s
	  kube-system                 amd-gpu-device-plugin-cmmr7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 coredns-66bc5c9577-xb5hd                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 csi-hostpathplugin-m4svl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 etcd-addons-167289                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m8s
	  kube-system                 kindnet-cf2ws                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-addons-167289                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-addons-167289       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-lrvxh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-addons-167289                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 metrics-server-85b7d694d7-j62rx             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m1s
	  kube-system                 nvidia-device-plugin-daemonset-sb8hx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 registry-6b586f9694-fvk8h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-creds-764b6fb674-85l2k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-proxy-7s98h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-q5bjz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 snapshot-controller-7d9fbc56b8-qfskz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  local-path-storage          local-path-provisioner-648f6765c9-sjqfv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lfwjh              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  Starting                 4m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node addons-167289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node addons-167289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x8 over 4m13s)  kubelet          Node addons-167289 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s                   kubelet          Node addons-167289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s                   kubelet          Node addons-167289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s                   kubelet          Node addons-167289 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m3s                   node-controller  Node addons-167289 event: Registered Node addons-167289 in Controller
	  Normal  NodeReady                3m21s                  kubelet          Node addons-167289 status is now: NodeReady
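The Allocated resources block is just the column totals of the pod table: CPU requests 100m (ingress-nginx controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m, about 13% of the 8-CPU node; memory requests 90Mi + 70Mi + 100Mi + 50Mi + 200Mi + 128Mi (yakd) = 638Mi; and the only limits set are kindnet's 100m of CPU plus the 170Mi + 50Mi + 256Mi = 476Mi of memory from coredns, kindnet, and yakd.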
	
	
	==> dmesg <==
	[  +0.087110] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.840612] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 01:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.036368] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023894] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +2.047754] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 01:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +8.383180] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[ +16.382291] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[ +32.252687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	
	
	==> etcd [c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac] <==
	{"level":"warn","ts":"2025-11-19T01:56:58.597625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.603973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.610027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.616271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.623025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.629701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.636152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.641510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.657797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.663457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.669417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.715513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:09.473180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:09.479299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:36.112333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:36.119205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:36.141753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57256","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T01:58:02.626611Z","caller":"traceutil/trace.go:172","msg":"trace[1512337794] transaction","detail":"{read_only:false; response_revision:1039; number_of_response:1; }","duration":"123.6037ms","start":"2025-11-19T01:58:02.502989Z","end":"2025-11-19T01:58:02.626592Z","steps":["trace[1512337794] 'process raft request'  (duration: 123.417566ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T01:58:15.949802Z","caller":"traceutil/trace.go:172","msg":"trace[997775731] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"158.906166ms","start":"2025-11-19T01:58:15.790877Z","end":"2025-11-19T01:58:15.949783Z","steps":["trace[997775731] 'process raft request'  (duration: 101.777171ms)","trace[997775731] 'compare'  (duration: 57.038639ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T01:58:15.972701Z","caller":"traceutil/trace.go:172","msg":"trace[1121739427] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"150.229829ms","start":"2025-11-19T01:58:15.822455Z","end":"2025-11-19T01:58:15.972685Z","steps":["trace[1121739427] 'process raft request'  (duration: 150.069565ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T01:58:17.649810Z","caller":"traceutil/trace.go:172","msg":"trace[1376741424] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"134.665984ms","start":"2025-11-19T01:58:17.515123Z","end":"2025-11-19T01:58:17.649789Z","steps":["trace[1376741424] 'process raft request'  (duration: 71.983384ms)","trace[1376741424] 'compare'  (duration: 62.468388ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T01:58:17.936690Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.467719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaimtemplates\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T01:58:17.936771Z","caller":"traceutil/trace.go:172","msg":"trace[1102516106] range","detail":"{range_begin:/registry/resourceclaimtemplates; range_end:; response_count:0; response_revision:1167; }","duration":"126.563777ms","start":"2025-11-19T01:58:17.810190Z","end":"2025-11-19T01:58:17.936754Z","steps":["trace[1102516106] 'range keys from in-memory index tree'  (duration: 126.405737ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:58:20.825564Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.076484ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041412979033635 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:1092 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:65 lease:8128041412979033631 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T01:58:20.825661Z","caller":"traceutil/trace.go:172","msg":"trace[477823980] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"163.341724ms","start":"2025-11-19T01:58:20.662300Z","end":"2025-11-19T01:58:20.825642Z","steps":["trace[477823980] 'compare'  (duration: 157.986528ms)"],"step_count":1}
	
	
	==> gcp-auth [559ecf78b7f29981481dbedebcc741a683d8c35abcce59cef31007660aa35951] <==
	2025/11/19 01:58:23 GCP Auth Webhook started!
	2025/11/19 01:58:28 Ready to marshal response ...
	2025/11/19 01:58:28 Ready to write response ...
	2025/11/19 01:58:28 Ready to marshal response ...
	2025/11/19 01:58:28 Ready to write response ...
	2025/11/19 01:58:28 Ready to marshal response ...
	2025/11/19 01:58:28 Ready to write response ...
	2025/11/19 01:58:42 Ready to marshal response ...
	2025/11/19 01:58:42 Ready to write response ...
	2025/11/19 01:58:46 Ready to marshal response ...
	2025/11/19 01:58:46 Ready to write response ...
	2025/11/19 01:58:54 Ready to marshal response ...
	2025/11/19 01:58:54 Ready to write response ...
	2025/11/19 01:58:54 Ready to marshal response ...
	2025/11/19 01:58:54 Ready to write response ...
	2025/11/19 01:59:01 Ready to marshal response ...
	2025/11/19 01:59:01 Ready to write response ...
	2025/11/19 01:59:02 Ready to marshal response ...
	2025/11/19 01:59:02 Ready to write response ...
	2025/11/19 01:59:17 Ready to marshal response ...
	2025/11/19 01:59:17 Ready to write response ...
	2025/11/19 02:01:08 Ready to marshal response ...
	2025/11/19 02:01:08 Ready to write response ...
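Each "Ready to marshal response ... / Ready to write response ..." pair above is the gcp-auth mutating webhook admitting one object and injecting the credential mount announced in the start log. Per that log's note, a pod opts out via a label with the gcp-auth-skip-secret key; here is a hypothetical manifest using the Kubernetes Go types (the "true" label value is an assumption; only the key is documented in the log):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// noCredsPod builds a pod the gcp-auth webhook should leave unmutated,
// thanks to the gcp-auth-skip-secret label on its metadata.
func noCredsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-creds",
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/k8s-minikube/busybox", // image taken from the report, for illustration
			}},
		},
	}
}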
	
	
	==> kernel <==
	 02:01:09 up 43 min,  0 user,  load average: 0.65, 0.77, 0.38
	Linux addons-167289 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55] <==
	I1119 01:59:08.473004       1 main.go:301] handling current node
	I1119 01:59:18.472280       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:18.472329       1 main.go:301] handling current node
	I1119 01:59:28.474611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:28.474638       1 main.go:301] handling current node
	I1119 01:59:38.472891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:38.472923       1 main.go:301] handling current node
	I1119 01:59:48.474707       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:48.474736       1 main.go:301] handling current node
	I1119 01:59:58.476424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:58.476467       1 main.go:301] handling current node
	I1119 02:00:08.481527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:08.481556       1 main.go:301] handling current node
	I1119 02:00:18.476035       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:18.476068       1 main.go:301] handling current node
	I1119 02:00:28.475315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:28.475351       1 main.go:301] handling current node
	I1119 02:00:38.475124       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:38.475168       1 main.go:301] handling current node
	I1119 02:00:48.472385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:48.472424       1 main.go:301] handling current node
	I1119 02:00:58.474154       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:58.474184       1 main.go:301] handling current node
	I1119 02:01:08.472643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:01:08.472679       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef] <==
	W1119 01:57:36.141729       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:48.899301       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.85.139:443: connect: connection refused
	E1119 01:57:48.899342       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.85.139:443: connect: connection refused" logger="UnhandledError"
	W1119 01:57:48.899525       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.85.139:443: connect: connection refused
	E1119 01:57:48.899560       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.85.139:443: connect: connection refused" logger="UnhandledError"
	W1119 01:57:48.915201       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.85.139:443: connect: connection refused
	E1119 01:57:48.915235       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.85.139:443: connect: connection refused" logger="UnhandledError"
	W1119 01:57:48.922256       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.85.139:443: connect: connection refused
	E1119 01:57:48.922375       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.85.139:443: connect: connection refused" logger="UnhandledError"
	W1119 01:58:01.718969       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 01:58:01.719049       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1119 01:58:01.719586       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.226.170:443: connect: connection refused" logger="UnhandledError"
	E1119 01:58:01.721225       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.226.170:443: connect: connection refused" logger="UnhandledError"
	E1119 01:58:01.726641       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.226.170:443: connect: connection refused" logger="UnhandledError"
	E1119 01:58:01.747807       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.226.170:443: connect: connection refused" logger="UnhandledError"
	I1119 01:58:01.822693       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 01:58:36.406722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41016: use of closed network connection
	E1119 01:58:36.547761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41042: use of closed network connection
	I1119 01:58:42.286604       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1119 01:58:42.473207       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.236.63"}
	I1119 01:59:13.752394       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1119 02:01:08.273933       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.109.30"}
	
	
	==> kube-controller-manager [caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597] <==
	I1119 01:57:06.090804       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 01:57:06.090834       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 01:57:06.090866       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 01:57:06.090944       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 01:57:06.090948       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 01:57:06.090953       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 01:57:06.091139       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-167289"
	I1119 01:57:06.091192       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 01:57:06.091232       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 01:57:06.093197       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 01:57:06.093278       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 01:57:06.095301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:57:06.098926       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 01:57:06.101158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:57:06.104392       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 01:57:06.108571       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 01:57:06.118211       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1119 01:57:36.105535       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 01:57:36.105689       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 01:57:36.105751       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 01:57:36.126807       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 01:57:36.130049       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 01:57:36.206016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:57:36.230463       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 01:57:51.096690       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651] <==
	I1119 01:57:08.098911       1 server_linux.go:53] "Using iptables proxy"
	I1119 01:57:08.315948       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 01:57:08.417144       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 01:57:08.417261       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 01:57:08.417370       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 01:57:08.461402       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 01:57:08.461484       1 server_linux.go:132] "Using iptables Proxier"
	I1119 01:57:08.468358       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 01:57:08.477256       1 server.go:527] "Version info" version="v1.34.1"
	I1119 01:57:08.477287       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 01:57:08.478658       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 01:57:08.480767       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 01:57:08.478873       1 config.go:200] "Starting service config controller"
	I1119 01:57:08.480885       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 01:57:08.478892       1 config.go:309] "Starting node config controller"
	I1119 01:57:08.481007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 01:57:08.481023       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 01:57:08.479448       1 config.go:106] "Starting endpoint slice config controller"
	I1119 01:57:08.481061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 01:57:08.581045       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 01:57:08.581178       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 01:57:08.582223       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245] <==
	E1119 01:56:59.106708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 01:56:59.106853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 01:56:59.106868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 01:56:59.106927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 01:56:59.107042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 01:56:59.107044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 01:56:59.107307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 01:56:59.107327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 01:56:59.107367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 01:56:59.107375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 01:56:59.107642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 01:56:59.107672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 01:56:59.107693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 01:56:59.107769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 01:56:59.107885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 01:56:59.107965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 01:56:59.920998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 01:56:59.961106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 01:56:59.970002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 01:57:00.146831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 01:57:00.258122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 01:57:00.287179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 01:57:00.287895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 01:57:00.375846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1119 01:57:03.505076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 01:59:25 addons-167289 kubelet[1293]: I1119 01:59:25.167467    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988\") on node \"addons-167289\" "
	Nov 19 01:59:25 addons-167289 kubelet[1293]: E1119 01:59:25.172832    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 podName: nodeName:}" failed. No retries permitted until 2025-11-19 01:59:25.672809605 +0000 UTC m=+144.209564025 (durationBeforeRetry 500ms). Error: UnmountDevice failed for volume "pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988") on node "addons-167289" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 does not exist in the volumes list
	Nov 19 01:59:25 addons-167289 kubelet[1293]: I1119 01:59:25.542988    1293 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310f8e5a-62ba-486e-a734-6d5ba96c2a4b" path="/var/lib/kubelet/pods/310f8e5a-62ba-486e-a734-6d5ba96c2a4b/volumes"
	Nov 19 01:59:25 addons-167289 kubelet[1293]: I1119 01:59:25.771676    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988\") on node \"addons-167289\" "
	Nov 19 01:59:25 addons-167289 kubelet[1293]: E1119 01:59:25.776593    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 podName: nodeName:}" failed. No retries permitted until 2025-11-19 01:59:26.776573668 +0000 UTC m=+145.313328085 (durationBeforeRetry 1s). Error: UnmountDevice failed for volume "pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988") on node "addons-167289" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 does not exist in the volumes list
	Nov 19 01:59:26 addons-167289 kubelet[1293]: I1119 01:59:26.778930    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988\") on node \"addons-167289\" "
	Nov 19 01:59:26 addons-167289 kubelet[1293]: E1119 01:59:26.782906    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 podName: nodeName:}" failed. No retries permitted until 2025-11-19 01:59:28.782891142 +0000 UTC m=+147.319645544 (durationBeforeRetry 2s). Error: UnmountDevice failed for volume "pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988") on node "addons-167289" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 does not exist in the volumes list
	Nov 19 01:59:28 addons-167289 kubelet[1293]: I1119 01:59:28.792373    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988\") on node \"addons-167289\" "
	Nov 19 01:59:28 addons-167289 kubelet[1293]: E1119 01:59:28.796492    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 podName: nodeName:}" failed. No retries permitted until 2025-11-19 01:59:32.796470086 +0000 UTC m=+151.333224506 (durationBeforeRetry 4s). Error: UnmountDevice failed for volume "pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988") on node "addons-167289" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 does not exist in the volumes list
	Nov 19 01:59:32 addons-167289 kubelet[1293]: I1119 01:59:32.820165    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988\") on node \"addons-167289\" "
	Nov 19 01:59:32 addons-167289 kubelet[1293]: E1119 01:59:32.824310    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 podName: nodeName:}" failed. No retries permitted until 2025-11-19 01:59:40.824289665 +0000 UTC m=+159.361044077 (durationBeforeRetry 8s). Error: UnmountDevice failed for volume "pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988") on node "addons-167289" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 does not exist in the volumes list
	Nov 19 01:59:33 addons-167289 kubelet[1293]: I1119 01:59:33.539739    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-sb8hx" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 01:59:40 addons-167289 kubelet[1293]: I1119 01:59:40.877823    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988\") on node \"addons-167289\" "
	Nov 19 01:59:40 addons-167289 kubelet[1293]: E1119 01:59:40.881817    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 podName: nodeName:}" failed. No retries permitted until 2025-11-19 01:59:56.881801482 +0000 UTC m=+175.418555887 (durationBeforeRetry 16s). Error: UnmountDevice failed for volume "pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988") on node "addons-167289" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 does not exist in the volumes list
	Nov 19 01:59:56 addons-167289 kubelet[1293]: I1119 01:59:56.885843    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988\") on node \"addons-167289\" "
	Nov 19 01:59:56 addons-167289 kubelet[1293]: E1119 01:59:56.889754    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 podName: nodeName:}" failed. No retries permitted until 2025-11-19 02:00:28.889734654 +0000 UTC m=+207.426489074 (durationBeforeRetry 32s). Error: UnmountDevice failed for volume "pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988") on node "addons-167289" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 does not exist in the volumes list
	Nov 19 02:00:01 addons-167289 kubelet[1293]: I1119 02:00:01.567489    1293 scope.go:117] "RemoveContainer" containerID="ea23914e2567bd0c8d4dea498c97034c6ccfeaa34b6fa92377ecd50bff65a32f"
	Nov 19 02:00:01 addons-167289 kubelet[1293]: I1119 02:00:01.576580    1293 scope.go:117] "RemoveContainer" containerID="3a56365de83cdb4ed42dbeac158526236e2795828bbc05d4724d069f3ed3b353"
	Nov 19 02:00:28 addons-167289 kubelet[1293]: I1119 02:00:28.896744    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988\") on node \"addons-167289\" "
	Nov 19 02:00:28 addons-167289 kubelet[1293]: E1119 02:00:28.902818    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 podName: nodeName:}" failed. No retries permitted until 2025-11-19 02:01:32.902787955 +0000 UTC m=+271.439542377 (durationBeforeRetry 1m4s). Error: UnmountDevice failed for volume "pvc-0a7fa22b-3848-4e16-80b7-26bf2d62f79d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5ab5c36d-c4eb-11f0-9ab8-aa2874d92988") on node "addons-167289" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 5ab5c36d-c4eb-11f0-9ab8-aa2874d92988 does not exist in the volumes list
	Nov 19 02:00:29 addons-167289 kubelet[1293]: I1119 02:00:29.540614    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cmmr7" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:00:34 addons-167289 kubelet[1293]: I1119 02:00:34.540484    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7s98h" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:00:56 addons-167289 kubelet[1293]: I1119 02:00:56.540384    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-sb8hx" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:01:08 addons-167289 kubelet[1293]: I1119 02:01:08.269645    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f365e3f0-ebc6-497d-81ff-b1a6a66c68c3-gcp-creds\") pod \"hello-world-app-5d498dc89-hxtm2\" (UID: \"f365e3f0-ebc6-497d-81ff-b1a6a66c68c3\") " pod="default/hello-world-app-5d498dc89-hxtm2"
	Nov 19 02:01:08 addons-167289 kubelet[1293]: I1119 02:01:08.269745    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6bdt\" (UniqueName: \"kubernetes.io/projected/f365e3f0-ebc6-497d-81ff-b1a6a66c68c3-kube-api-access-z6bdt\") pod \"hello-world-app-5d498dc89-hxtm2\" (UID: \"f365e3f0-ebc6-497d-81ff-b1a6a66c68c3\") " pod="default/hello-world-app-5d498dc89-hxtm2"
	
	
	==> storage-provisioner [2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6] <==
	W1119 02:00:44.124820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:46.127530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:46.133146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:48.136034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:48.140380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:50.142952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:50.146296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:52.149092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:52.154017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:54.156715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:54.160348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:56.162771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:56.166400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:58.168636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:58.171735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:00.174652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:00.179090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:02.181428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:02.184907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:04.187260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:04.191970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:06.194613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:06.199201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:08.201721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:08.205331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
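The storage-provisioner block above is dominated by a single repeated deprecation warning: it still watches core/v1 Endpoints, which v1.33+ deprecates in favor of discovery.k8s.io/v1 EndpointSlice. Below is a minimal sketch of the suggested replacement, assuming client-go and a local kubeconfig; the watch-based shape and the all-namespaces scope are illustrative choices, not taken from the provisioner's source.

package main

import (
	"context"
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig location (assumption:
	// the cluster is reachable the same way kubectl reaches it).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch discovery.k8s.io/v1 EndpointSlices instead of core/v1 Endpoints;
	// this is the replacement the warning in the log recommends.
	w, err := cs.DiscoveryV1().EndpointSlices(metav1.NamespaceAll).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if eps, ok := ev.Object.(*discoveryv1.EndpointSlice); ok {
			fmt.Printf("%s %s/%s\n", ev.Type, eps.Namespace, eps.Name)
		}
	}
}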
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-167289 -n addons-167289
helpers_test.go:269: (dbg) Run:  kubectl --context addons-167289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-hxtm2 ingress-nginx-admission-create-7868s ingress-nginx-admission-patch-mq2v2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-167289 describe pod hello-world-app-5d498dc89-hxtm2 ingress-nginx-admission-create-7868s ingress-nginx-admission-patch-mq2v2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-167289 describe pod hello-world-app-5d498dc89-hxtm2 ingress-nginx-admission-create-7868s ingress-nginx-admission-patch-mq2v2: exit status 1 (66.117608ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-hxtm2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-167289/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 02:01:08 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Running
	IP:               10.244.0.32
	IPs:
	  IP:           10.244.0.32
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   cri-o://64e810260aee33ce689e4b1ac743e73d187d791f40c1904c360cac592da3a234
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Running
	      Started:      Wed, 19 Nov 2025 02:01:09 +0000
	    Ready:          True
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6bdt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       True 
	  ContainersReady             True 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z6bdt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-hxtm2 to addons-167289
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 880ms (880ms including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7868s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mq2v2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-167289 describe pod hello-world-app-5d498dc89-hxtm2 ingress-nginx-admission-create-7868s ingress-nginx-admission-patch-mq2v2: exit status 1
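The post-mortem above shells out to kubectl with --field-selector=status.phase!=Running and then describes whatever it finds; the two admission pods had already been cleaned up by the time describe ran, hence the NotFound errors and exit status 1. The same query expressed with client-go, as a hedged sketch (the clientset wiring is assumed, and nonRunningPods is a hypothetical helper name, not part of helpers_test.go):

package probe

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nonRunningPods lists pods across all namespaces whose phase is anything
// but Running, mirroring the helper's kubectl field selector.
func nonRunningPods(cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
	return nil
}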
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (229.760639ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:01:10.531584   30360 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:10.531877   30360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:10.531887   30360 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:10.531891   30360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:10.532101   30360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:01:10.532389   30360 mustload.go:66] Loading cluster: addons-167289
	I1119 02:01:10.532771   30360 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:10.532788   30360 addons.go:607] checking whether the cluster is paused
	I1119 02:01:10.532886   30360 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:10.532902   30360 host.go:66] Checking if "addons-167289" exists ...
	I1119 02:01:10.533268   30360 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 02:01:10.550490   30360 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:10.550532   30360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 02:01:10.567252   30360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 02:01:10.660657   30360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:10.660733   30360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:10.687400   30360 cri.go:89] found id: "6bb674f78899cf132b694502d156929f80addc5f2e093e36d38f505f43b4e6ed"
	I1119 02:01:10.687418   30360 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 02:01:10.687421   30360 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 02:01:10.687425   30360 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 02:01:10.687427   30360 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 02:01:10.687451   30360 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 02:01:10.687456   30360 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 02:01:10.687461   30360 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 02:01:10.687465   30360 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 02:01:10.687472   30360 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 02:01:10.687476   30360 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 02:01:10.687481   30360 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 02:01:10.687488   30360 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 02:01:10.687493   30360 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 02:01:10.687498   30360 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 02:01:10.687503   30360 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 02:01:10.687505   30360 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 02:01:10.687510   30360 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 02:01:10.687512   30360 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 02:01:10.687514   30360 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 02:01:10.687517   30360 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 02:01:10.687519   30360 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 02:01:10.687521   30360 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 02:01:10.687524   30360 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 02:01:10.687526   30360 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 02:01:10.687529   30360 cri.go:89] found id: ""
	I1119 02:01:10.687567   30360 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:10.700357   30360 out.go:203] 
	W1119 02:01:10.701425   30360 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:10.701455   30360 out.go:285] * 
	* 
	W1119 02:01:10.704507   30360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:10.705650   30360 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
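The exit status 11 above never reaches the addon itself: before disabling anything, minikube checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`, which fails on this crio node because /run/runc does not exist. A hedged Go sketch of that step, including one possible softening that treats the missing state directory as "nothing paused" (illustrative only; this is not minikube's actual implementation):

package probe

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer holds the two fields of `runc list -f json` output that a
// paused check cares about.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers reproduces the step that fails in the log: list runc
// containers and collect the paused ones. The strings.Contains branch is a
// hypothetical softening, not behavior minikube has today.
func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "/run/runc: no such file or directory") {
			// The runtime keeps no runc state dir (as on this crio node),
			// so nothing can be paused there.
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

Whether swallowing that error is safe depends on where crio's configured OCI runtime keeps its state; the sketch only shows where the failure enters the disable path.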
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable ingress --alsologtostderr -v=1: exit status 11 (227.197044ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:01:10.760281   30425 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:10.760560   30425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:10.760569   30425 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:10.760573   30425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:10.760724   30425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:01:10.760967   30425 mustload.go:66] Loading cluster: addons-167289
	I1119 02:01:10.761279   30425 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:10.761293   30425 addons.go:607] checking whether the cluster is paused
	I1119 02:01:10.761371   30425 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:10.761381   30425 host.go:66] Checking if "addons-167289" exists ...
	I1119 02:01:10.761710   30425 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 02:01:10.778500   30425 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:10.778540   30425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 02:01:10.794671   30425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 02:01:10.886356   30425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:10.886475   30425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:10.913892   30425 cri.go:89] found id: "6bb674f78899cf132b694502d156929f80addc5f2e093e36d38f505f43b4e6ed"
	I1119 02:01:10.913921   30425 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 02:01:10.913925   30425 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 02:01:10.913928   30425 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 02:01:10.913931   30425 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 02:01:10.913934   30425 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 02:01:10.913937   30425 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 02:01:10.913939   30425 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 02:01:10.913941   30425 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 02:01:10.913946   30425 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 02:01:10.913948   30425 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 02:01:10.913950   30425 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 02:01:10.913953   30425 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 02:01:10.913962   30425 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 02:01:10.913965   30425 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 02:01:10.913975   30425 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 02:01:10.913979   30425 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 02:01:10.913983   30425 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 02:01:10.913985   30425 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 02:01:10.913988   30425 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 02:01:10.913990   30425 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 02:01:10.913992   30425 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 02:01:10.913994   30425 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 02:01:10.913997   30425 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 02:01:10.913999   30425 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 02:01:10.914001   30425 cri.go:89] found id: ""
	I1119 02:01:10.914039   30425 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:10.927201   30425 out.go:203] 
	W1119 02:01:10.928486   30425 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:10.928506   30425 out.go:285] * 
	* 
	W1119 02:01:10.932016   30425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:10.933268   30425 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.91s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qm258" [f00d3076-67a0-4851-9642-dd7fc9a21c9f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003406937s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (245.209864ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1119 01:58:44.205849   25759 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:44.206111   25759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:44.206119   25759 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:44.206123   25759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:44.206319   25759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:44.206558   25759 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:44.206878   25759 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:44.206892   25759 addons.go:607] checking whether the cluster is paused
	I1119 01:58:44.206970   25759 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:44.206981   25759 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:44.207315   25759 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:44.224025   25759 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:44.224075   25759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:44.241197   25759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:44.339068   25759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:44.339157   25759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:44.369580   25759 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:44.369603   25759 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:44.369609   25759 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:44.369613   25759 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:44.369618   25759 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:44.369623   25759 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:44.369627   25759 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:44.369630   25759 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:44.369634   25759 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:44.369641   25759 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:44.369646   25759 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:44.369650   25759 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:44.369654   25759 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:44.369677   25759 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:44.369684   25759 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:44.369695   25759 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:44.369702   25759 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:44.369707   25759 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:44.369711   25759 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:44.369714   25759 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:44.369718   25759 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:44.369722   25759 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:44.369726   25759 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:44.369732   25759 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:44.369736   25759 cri.go:89] found id: ""
	I1119 01:58:44.369780   25759 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:44.384219   25759 out.go:203] 
	W1119 01:58:44.385327   25759 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:44.385343   25759 out.go:285] * 
	* 
	W1119 01:58:44.388421   25759 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:44.389558   25759 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
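
Note that the addon itself was healthy (k8s-app=gadget went Running in ~5s); only the trailing disable hit the runc probe. The wait at addons_test.go:823 amounts to polling pod phase by label until every match reports Running. A compact sketch of that pattern follows; the helper name, 2s interval, and phase-only readiness check are simplifications I'm assuming, not the real helper in helpers_test.go.

// Hedged sketch: poll for pods matching a label until all report Running.
// A simplification of what a test helper might do; not minikube's helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLabel polls `kubectl get pods -l <label> -n <ns>` and succeeds once
// every matching pod reports phase Running.
func waitForLabel(ctx, ns, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pods", "-n", ns, "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			allRunning := true
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s in %s", label, ns)
}

func main() {
	if err := waitForLabel("addons-167289", "gadget", "k8s-app=gadget", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}
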
TestAddons/parallel/MetricsServer (5.3s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.515563ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002280249s
addons_test.go:463: (dbg) Run:  kubectl --context addons-167289 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (239.237406ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1119 01:58:41.905371   25076 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:41.905702   25076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:41.905713   25076 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:41.905719   25076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:41.905907   25076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:41.906201   25076 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:41.906606   25076 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:41.906626   25076 addons.go:607] checking whether the cluster is paused
	I1119 01:58:41.906747   25076 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:41.906762   25076 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:41.907153   25076 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:41.924667   25076 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:41.924718   25076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:41.940931   25076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:42.034651   25076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:42.034721   25076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:42.063576   25076 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:42.063593   25076 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:42.063597   25076 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:42.063605   25076 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:42.063613   25076 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:42.063616   25076 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:42.063618   25076 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:42.063621   25076 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:42.063623   25076 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:42.063628   25076 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:42.063630   25076 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:42.063633   25076 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:42.063637   25076 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:42.063639   25076 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:42.063642   25076 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:42.063646   25076 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:42.063652   25076 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:42.063669   25076 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:42.063674   25076 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:42.063676   25076 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:42.063679   25076 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:42.063681   25076 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:42.063684   25076 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:42.063686   25076 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:42.063689   25076 cri.go:89] found id: ""
	I1119 01:58:42.063725   25076 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:42.077714   25076 out.go:203] 
	W1119 01:58:42.078972   25076 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:42.078991   25076 out.go:285] * 
	* 
	W1119 01:58:42.082078   25076 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:42.083639   25076 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)
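
As with the gadget test, metrics-server was serving (`kubectl top pods -n kube-system` at addons_test.go:463 returned data within ~5s of the pod going healthy); the exit 11 again comes from the disable path. Since `kubectl top` only succeeds after metrics-server's first scrape, polling with retries is the usual pattern. A hedged sketch, where the retry budget and interval are arbitrary assumptions:

// Hedged sketch: retry `kubectl top pods` until metrics-server has scraped
// at least once. Retry count and sleep are assumptions, not test values.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func topPods(ctx, ns string) error {
	var lastErr error
	for i := 0; i < 10; i++ { // metrics can lag well after pod readiness
		out, err := exec.Command("kubectl", "--context", ctx,
			"top", "pods", "-n", ns).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		lastErr = fmt.Errorf("%v: %s", err, out)
		time.Sleep(6 * time.Second)
	}
	return lastErr
}

func main() {
	if err := topPods("addons-167289", "kube-system"); err != nil {
		fmt.Println("metrics not available:", err)
	}
}
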
TestAddons/parallel/CSI (41.47s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1119 01:58:44.395552   14634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1119 01:58:44.398546   14634 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 01:58:44.398566   14634 kapi.go:107] duration metric: took 3.036404ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.044388ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-167289 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-167289 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [74cd382e-fc53-4f04-9cb5-3d5da50bfe38] Pending
helpers_test.go:352: "task-pv-pod" [74cd382e-fc53-4f04-9cb5-3d5da50bfe38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [74cd382e-fc53-4f04-9cb5-3d5da50bfe38] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003464498s
addons_test.go:572: (dbg) Run:  kubectl --context addons-167289 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-167289 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-167289 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-167289 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-167289 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-167289 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-167289 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [310f8e5a-62ba-486e-a734-6d5ba96c2a4b] Pending
helpers_test.go:352: "task-pv-pod-restore" [310f8e5a-62ba-486e-a734-6d5ba96c2a4b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [310f8e5a-62ba-486e-a734-6d5ba96c2a4b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003628739s
addons_test.go:614: (dbg) Run:  kubectl --context addons-167289 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-167289 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-167289 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (229.516603ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1119 01:59:25.455693   28226 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:59:25.455965   28226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:59:25.455975   28226 out.go:374] Setting ErrFile to fd 2...
	I1119 01:59:25.455979   28226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:59:25.456159   28226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:59:25.456382   28226 mustload.go:66] Loading cluster: addons-167289
	I1119 01:59:25.456694   28226 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:59:25.456709   28226 addons.go:607] checking whether the cluster is paused
	I1119 01:59:25.456788   28226 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:59:25.456799   28226 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:59:25.457100   28226 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:59:25.474027   28226 ssh_runner.go:195] Run: systemctl --version
	I1119 01:59:25.474077   28226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:59:25.490316   28226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:59:25.583654   28226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:59:25.583759   28226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:59:25.611297   28226 cri.go:89] found id: "6bb674f78899cf132b694502d156929f80addc5f2e093e36d38f505f43b4e6ed"
	I1119 01:59:25.611318   28226 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:59:25.611323   28226 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:59:25.611327   28226 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:59:25.611331   28226 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:59:25.611341   28226 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:59:25.611345   28226 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:59:25.611349   28226 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:59:25.611352   28226 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:59:25.611365   28226 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:59:25.611375   28226 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:59:25.611380   28226 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:59:25.611388   28226 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:59:25.611392   28226 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:59:25.611400   28226 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:59:25.611408   28226 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:59:25.611415   28226 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:59:25.611422   28226 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:59:25.611425   28226 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:59:25.611440   28226 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:59:25.611446   28226 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:59:25.611452   28226 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:59:25.611457   28226 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:59:25.611464   28226 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:59:25.611468   28226 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:59:25.611473   28226 cri.go:89] found id: ""
	I1119 01:59:25.611521   28226 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:59:25.625090   28226 out.go:203] 
	W1119 01:59:25.626415   28226 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:59:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:59:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:59:25.626458   28226 out.go:285] * 
	* 
	W1119 01:59:25.629447   28226 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:59:25.630709   28226 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (227.783025ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1119 01:59:25.688944   28289 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:59:25.689077   28289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:59:25.689087   28289 out.go:374] Setting ErrFile to fd 2...
	I1119 01:59:25.689091   28289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:59:25.689252   28289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:59:25.689492   28289 mustload.go:66] Loading cluster: addons-167289
	I1119 01:59:25.689786   28289 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:59:25.689797   28289 addons.go:607] checking whether the cluster is paused
	I1119 01:59:25.689875   28289 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:59:25.689885   28289 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:59:25.690226   28289 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:59:25.707742   28289 ssh_runner.go:195] Run: systemctl --version
	I1119 01:59:25.707790   28289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:59:25.724249   28289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:59:25.815315   28289 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:59:25.815415   28289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:59:25.842305   28289 cri.go:89] found id: "6bb674f78899cf132b694502d156929f80addc5f2e093e36d38f505f43b4e6ed"
	I1119 01:59:25.842324   28289 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:59:25.842329   28289 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:59:25.842333   28289 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:59:25.842337   28289 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:59:25.842341   28289 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:59:25.842345   28289 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:59:25.842349   28289 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:59:25.842352   28289 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:59:25.842360   28289 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:59:25.842364   28289 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:59:25.842369   28289 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:59:25.842373   28289 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:59:25.842377   28289 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:59:25.842382   28289 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:59:25.842396   28289 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:59:25.842405   28289 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:59:25.842410   28289 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:59:25.842414   28289 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:59:25.842417   28289 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:59:25.842425   28289 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:59:25.842448   28289 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:59:25.842452   28289 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:59:25.842457   28289 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:59:25.842462   28289 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:59:25.842467   28289 cri.go:89] found id: ""
	I1119 01:59:25.842534   28289 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:59:25.855505   28289 out.go:203] 
	W1119 01:59:25.856768   28289 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:59:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:59:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:59:25.856784   28289 out.go:285] * 
	* 
	W1119 01:59:25.859730   28289 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:59:25.861044   28289 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.47s)
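
The long run of helpers_test.go:402 lines above is one PVC wait rendered call-by-call: the helper re-reads `.status.phase` via jsonpath until the claim reports Bound. The same loop, compacted into a standalone sketch; the function name and 2s interval are assumptions.

// Hedged sketch: the PVC wait loop behind the repeated helpers_test.go:402
// calls above, compacted. Names and polling interval are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitPVCBound polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
// until the claim reports Bound or the timeout elapses.
func waitPVCBound(ctx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s never reached Bound", ns, name)
}

func main() {
	if err := waitPVCBound("addons-167289", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
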
TestAddons/parallel/Headlamp (2.36s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-167289 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-167289 --alsologtostderr -v=1: exit status 11 (232.117428ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1119 01:58:36.837148   24226 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:36.837646   24226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:36.837657   24226 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:36.837663   24226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:36.837853   24226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:36.838116   24226 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:36.838471   24226 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:36.838490   24226 addons.go:607] checking whether the cluster is paused
	I1119 01:58:36.838592   24226 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:36.838616   24226 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:36.839086   24226 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:36.856160   24226 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:36.856199   24226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:36.873033   24226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:36.965355   24226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:36.965420   24226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:36.992304   24226 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:36.992329   24226 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:36.992335   24226 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:36.992341   24226 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:36.992363   24226 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:36.992368   24226 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:36.992374   24226 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:36.992379   24226 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:36.992387   24226 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:36.992395   24226 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:36.992402   24226 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:36.992406   24226 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:36.992408   24226 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:36.992411   24226 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:36.992414   24226 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:36.992418   24226 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:36.992424   24226 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:36.992427   24226 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:36.992441   24226 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:36.992445   24226 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:36.992449   24226 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:36.992453   24226 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:36.992459   24226 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:36.992464   24226 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:36.992468   24226 cri.go:89] found id: ""
	I1119 01:58:36.992506   24226 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:37.006029   24226 out.go:203] 
	W1119 01:58:37.007483   24226 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:37.007501   24226 out.go:285] * 
	* 
	W1119 01:58:37.010412   24226 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:37.011659   24226 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-167289 --alsologtostderr -v=1": exit status 11
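
Throughout these traces, cli_runner.go:164 resolves the node's SSH endpoint by asking Docker for the host port bound to 22/tcp; the post-mortem docker inspect dump below shows the same NetworkSettings.Ports map it reads (22/tcp → 127.0.0.1:32768 in this run). That lookup as a standalone sketch, assuming only that the docker CLI is on PATH:

// Hedged sketch: resolve the host port Docker mapped to the node's 22/tcp,
// using the same Go template the cli_runner lines above show.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-167289")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // e.g. 32768 in this run
}
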
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-167289
helpers_test.go:243: (dbg) docker inspect addons-167289:
-- stdout --
	[
	    {
	        "Id": "1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799",
	        "Created": "2025-11-19T01:56:47.824620544Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16629,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T01:56:47.855109279Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799/hostname",
	        "HostsPath": "/var/lib/docker/containers/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799/hosts",
	        "LogPath": "/var/lib/docker/containers/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799/1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799-json.log",
	        "Name": "/addons-167289",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-167289:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-167289",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1203decade436e5a99285de163c7efbd34c8e628ce3a9b855c75dee85825f799",
	                "LowerDir": "/var/lib/docker/overlay2/b471f4f520aa3dc0a41036926c583ed8e6e188a581176a3fbf87df8a1904e828-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b471f4f520aa3dc0a41036926c583ed8e6e188a581176a3fbf87df8a1904e828/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b471f4f520aa3dc0a41036926c583ed8e6e188a581176a3fbf87df8a1904e828/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b471f4f520aa3dc0a41036926c583ed8e6e188a581176a3fbf87df8a1904e828/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-167289",
	                "Source": "/var/lib/docker/volumes/addons-167289/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-167289",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-167289",
	                "name.minikube.sigs.k8s.io": "addons-167289",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "30075995df53fcbc60726c257c3c4e14775bf796b4eabe17b742c7954574fb34",
	            "SandboxKey": "/var/run/docker/netns/30075995df53",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-167289": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73031ed466e39bf7065dcfea0bf4a86593aa5b31f488c3d7233eef8fc32876c2",
	                    "EndpointID": "a685cc51d7b6d26d63caa7a962150b0642e1f8b1b8fa14e41ac97ff3d341da54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "86:3b:cd:26:f8:d9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-167289",
	                        "1203decade43"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
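Note on the inspect output above: minikube publishes each of the node container's service ports (22, 2376, 5000, 8443 and 32443/tcp) to an ephemeral host port bound to 127.0.0.1; the SSH port 22/tcp landed on 32768 here. The "Last Start" log below resolves these mappings with a docker inspect Go template. A minimal standalone Go sketch of that lookup (illustrative only, not minikube's actual cli_runner helper; the container name addons-167289 comes from the output above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the provisionDockerMachine log lines below pass to
		// "docker container inspect -f" to find the mapped SSH port.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-167289").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container inspected above this prints 32768, bound on 127.0.0.1.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}

Run against the live container, this prints 32768, matching the Ports map above and the "Using SSH client type: native ... 127.0.0.1 32768" lines later in the log.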
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-167289 -n addons-167289
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-167289 logs -n 25: (1.052016198s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-684189 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-684189   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ delete  │ -p download-only-684189                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-684189   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ start   │ -o=json --download-only -p download-only-306316 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-306316   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ delete  │ -p download-only-306316                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-306316   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ delete  │ -p download-only-684189                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-684189   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ delete  │ -p download-only-306316                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-306316   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ start   │ --download-only -p download-docker-148027 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-148027 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ -p download-docker-148027                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-148027 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ start   │ --download-only -p binary-mirror-075616 --alsologtostderr --binary-mirror http://127.0.0.1:42109 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-075616   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ -p binary-mirror-075616                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-075616   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ addons  │ enable dashboard -p addons-167289                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-167289          │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-167289                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-167289          │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ start   │ -p addons-167289 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-167289          │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:58 UTC │
	│ addons  │ addons-167289 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-167289          │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ addons-167289 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-167289          │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	│ addons  │ enable headlamp -p addons-167289 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-167289          │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:24.835519   15977 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:24.835621   15977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:24.835634   15977 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:24.835640   15977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:24.835847   15977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:56:24.836328   15977 out.go:368] Setting JSON to false
	I1119 01:56:24.837186   15977 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2332,"bootTime":1763515053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:24.837236   15977 start.go:143] virtualization: kvm guest
	I1119 01:56:24.839177   15977 out.go:179] * [addons-167289] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 01:56:24.840496   15977 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 01:56:24.840495   15977 notify.go:221] Checking for updates...
	I1119 01:56:24.843183   15977 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:24.844307   15977 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 01:56:24.845373   15977 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 01:56:24.846332   15977 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 01:56:24.847339   15977 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 01:56:24.848496   15977 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:24.869880   15977 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 01:56:24.870002   15977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:24.921236   15977 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 01:56:24.912714626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:24.921337   15977 docker.go:319] overlay module found
	I1119 01:56:24.922896   15977 out.go:179] * Using the docker driver based on user configuration
	I1119 01:56:24.923876   15977 start.go:309] selected driver: docker
	I1119 01:56:24.923890   15977 start.go:930] validating driver "docker" against <nil>
	I1119 01:56:24.923903   15977 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 01:56:24.924382   15977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:24.979268   15977 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 01:56:24.969260228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:24.979418   15977 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:24.979628   15977 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:56:24.981157   15977 out.go:179] * Using Docker driver with root privileges
	I1119 01:56:24.982247   15977 cni.go:84] Creating CNI manager for ""
	I1119 01:56:24.982309   15977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:56:24.982319   15977 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:56:24.982368   15977 start.go:353] cluster config:
	{Name:addons-167289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:24.983537   15977 out.go:179] * Starting "addons-167289" primary control-plane node in "addons-167289" cluster
	I1119 01:56:24.984652   15977 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 01:56:24.985660   15977 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:56:24.986682   15977 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:56:24.986708   15977 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 01:56:24.986716   15977 cache.go:65] Caching tarball of preloaded images
	I1119 01:56:24.986760   15977 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:56:24.986822   15977 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 01:56:24.986837   15977 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 01:56:24.987201   15977 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/config.json ...
	I1119 01:56:24.987228   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/config.json: {Name:mk1bcbc978f0a0c87baad2741a38ecbb57ca6166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:25.001654   15977 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:25.001796   15977 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:56:25.001813   15977 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1119 01:56:25.001817   15977 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1119 01:56:25.001823   15977 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1119 01:56:25.001835   15977 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1119 01:56:36.716217   15977 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1119 01:56:36.716279   15977 cache.go:243] Successfully downloaded all kic artifacts
	I1119 01:56:36.716361   15977 start.go:360] acquireMachinesLock for addons-167289: {Name:mk52be43c4a7bd92286dd93acb8c958bd94a02c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 01:56:36.716498   15977 start.go:364] duration metric: took 97.124µs to acquireMachinesLock for "addons-167289"
	I1119 01:56:36.716530   15977 start.go:93] Provisioning new machine with config: &{Name:addons-167289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:56:36.716602   15977 start.go:125] createHost starting for "" (driver="docker")
	I1119 01:56:36.718205   15977 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1119 01:56:36.718463   15977 start.go:159] libmachine.API.Create for "addons-167289" (driver="docker")
	I1119 01:56:36.718491   15977 client.go:173] LocalClient.Create starting
	I1119 01:56:36.718575   15977 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 01:56:36.877819   15977 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 01:56:37.014895   15977 cli_runner.go:164] Run: docker network inspect addons-167289 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 01:56:37.032008   15977 cli_runner.go:211] docker network inspect addons-167289 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 01:56:37.032091   15977 network_create.go:284] running [docker network inspect addons-167289] to gather additional debugging logs...
	I1119 01:56:37.032107   15977 cli_runner.go:164] Run: docker network inspect addons-167289
	W1119 01:56:37.046961   15977 cli_runner.go:211] docker network inspect addons-167289 returned with exit code 1
	I1119 01:56:37.046984   15977 network_create.go:287] error running [docker network inspect addons-167289]: docker network inspect addons-167289: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-167289 not found
	I1119 01:56:37.046995   15977 network_create.go:289] output of [docker network inspect addons-167289]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-167289 not found
	
	** /stderr **
	I1119 01:56:37.047088   15977 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 01:56:37.061486   15977 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c85550}
	I1119 01:56:37.061525   15977 network_create.go:124] attempt to create docker network addons-167289 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1119 01:56:37.061568   15977 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-167289 addons-167289
	I1119 01:56:37.104298   15977 network_create.go:108] docker network addons-167289 192.168.49.0/24 created
	I1119 01:56:37.104322   15977 kic.go:121] calculated static IP "192.168.49.2" for the "addons-167289" container
	I1119 01:56:37.104394   15977 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 01:56:37.119042   15977 cli_runner.go:164] Run: docker volume create addons-167289 --label name.minikube.sigs.k8s.io=addons-167289 --label created_by.minikube.sigs.k8s.io=true
	I1119 01:56:37.135147   15977 oci.go:103] Successfully created a docker volume addons-167289
	I1119 01:56:37.135235   15977 cli_runner.go:164] Run: docker run --rm --name addons-167289-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-167289 --entrypoint /usr/bin/test -v addons-167289:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 01:56:43.552192   15977 cli_runner.go:217] Completed: docker run --rm --name addons-167289-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-167289 --entrypoint /usr/bin/test -v addons-167289:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (6.416919548s)
	I1119 01:56:43.552221   15977 oci.go:107] Successfully prepared a docker volume addons-167289
	I1119 01:56:43.552278   15977 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:56:43.552300   15977 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 01:56:43.552344   15977 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-167289:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 01:56:47.757292   15977 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-167289:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.204905972s)
	I1119 01:56:47.757322   15977 kic.go:203] duration metric: took 4.205016937s to extract preloaded images to volume ...
	W1119 01:56:47.757414   15977 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 01:56:47.757482   15977 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 01:56:47.757521   15977 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 01:56:47.810662   15977 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-167289 --name addons-167289 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-167289 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-167289 --network addons-167289 --ip 192.168.49.2 --volume addons-167289:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 01:56:48.100601   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Running}}
	I1119 01:56:48.118079   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:56:48.135602   15977 cli_runner.go:164] Run: docker exec addons-167289 stat /var/lib/dpkg/alternatives/iptables
	I1119 01:56:48.178763   15977 oci.go:144] the created container "addons-167289" has a running status.
	I1119 01:56:48.178794   15977 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa...
	I1119 01:56:48.375807   15977 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 01:56:48.413219   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:56:48.431260   15977 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 01:56:48.431373   15977 kic_runner.go:114] Args: [docker exec --privileged addons-167289 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 01:56:48.481205   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:56:48.499111   15977 machine.go:94] provisionDockerMachine start ...
	I1119 01:56:48.499200   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:48.517051   15977 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:48.517290   15977 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 01:56:48.517308   15977 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 01:56:48.648770   15977 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-167289
	
	I1119 01:56:48.648800   15977 ubuntu.go:182] provisioning hostname "addons-167289"
	I1119 01:56:48.648895   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:48.666768   15977 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:48.666973   15977 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 01:56:48.666991   15977 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-167289 && echo "addons-167289" | sudo tee /etc/hostname
	I1119 01:56:48.804457   15977 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-167289
	
	I1119 01:56:48.804546   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:48.822232   15977 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:48.822490   15977 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 01:56:48.822515   15977 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-167289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-167289/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-167289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 01:56:48.949082   15977 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 01:56:48.949109   15977 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 01:56:48.949136   15977 ubuntu.go:190] setting up certificates
	I1119 01:56:48.949146   15977 provision.go:84] configureAuth start
	I1119 01:56:48.949190   15977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-167289
	I1119 01:56:48.965294   15977 provision.go:143] copyHostCerts
	I1119 01:56:48.965361   15977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 01:56:48.965510   15977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 01:56:48.965592   15977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 01:56:48.965658   15977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.addons-167289 san=[127.0.0.1 192.168.49.2 addons-167289 localhost minikube]
	I1119 01:56:49.292476   15977 provision.go:177] copyRemoteCerts
	I1119 01:56:49.292537   15977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 01:56:49.292569   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.309206   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:49.401495   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 01:56:49.418622   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 01:56:49.433842   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 01:56:49.448900   15977 provision.go:87] duration metric: took 499.743835ms to configureAuth
	I1119 01:56:49.448919   15977 ubuntu.go:206] setting minikube options for container-runtime
	I1119 01:56:49.449060   15977 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:56:49.449151   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.465614   15977 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:49.465830   15977 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 01:56:49.465852   15977 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 01:56:49.716393   15977 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 01:56:49.716419   15977 machine.go:97] duration metric: took 1.217285484s to provisionDockerMachine
	I1119 01:56:49.716455   15977 client.go:176] duration metric: took 12.997932509s to LocalClient.Create
	I1119 01:56:49.716478   15977 start.go:167] duration metric: took 12.998013526s to libmachine.API.Create "addons-167289"
	I1119 01:56:49.716488   15977 start.go:293] postStartSetup for "addons-167289" (driver="docker")
	I1119 01:56:49.716499   15977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 01:56:49.716570   15977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 01:56:49.716630   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.733630   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:49.826980   15977 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 01:56:49.830119   15977 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 01:56:49.830148   15977 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 01:56:49.830157   15977 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 01:56:49.830211   15977 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 01:56:49.830234   15977 start.go:296] duration metric: took 113.740218ms for postStartSetup
	I1119 01:56:49.830499   15977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-167289
	I1119 01:56:49.846945   15977 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/config.json ...
	I1119 01:56:49.847164   15977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 01:56:49.847201   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.863138   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:49.951724   15977 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 01:56:49.955824   15977 start.go:128] duration metric: took 13.239208195s to createHost
	I1119 01:56:49.955848   15977 start.go:83] releasing machines lock for "addons-167289", held for 13.239332596s
	I1119 01:56:49.955912   15977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-167289
	I1119 01:56:49.972152   15977 ssh_runner.go:195] Run: cat /version.json
	I1119 01:56:49.972192   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.972241   15977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 01:56:49.972308   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:56:49.989273   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:49.989286   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:56:50.130548   15977 ssh_runner.go:195] Run: systemctl --version
	I1119 01:56:50.136181   15977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 01:56:50.167016   15977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 01:56:50.170961   15977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 01:56:50.171015   15977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 01:56:50.194209   15977 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 01:56:50.194227   15977 start.go:496] detecting cgroup driver to use...
	I1119 01:56:50.194256   15977 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 01:56:50.194296   15977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 01:56:50.208696   15977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 01:56:50.219215   15977 docker.go:218] disabling cri-docker service (if available) ...
	I1119 01:56:50.219269   15977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 01:56:50.233605   15977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 01:56:50.248880   15977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 01:56:50.319820   15977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 01:56:50.402925   15977 docker.go:234] disabling docker service ...
	I1119 01:56:50.402991   15977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 01:56:50.419273   15977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 01:56:50.430488   15977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 01:56:50.505447   15977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 01:56:50.578770   15977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 01:56:50.590282   15977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 01:56:50.603110   15977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 01:56:50.603161   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.612695   15977 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 01:56:50.612748   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.620813   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.629133   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.636927   15977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 01:56:50.644134   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.651707   15977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:50.663500   15977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
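Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch; the section headers and any other keys come from the base image's file and are assumed here):
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]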
	I1119 01:56:50.671047   15977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 01:56:50.677262   15977 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 01:56:50.677323   15977 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 01:56:50.688271   15977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
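The failed sysctl above is expected before br_netfilter is loaded: the /proc/sys/net/bridge/ entries only exist once the module is in, which is why the code falls back to modprobe. A minimal manual check of the same sequence:
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables    # resolves once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"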
	I1119 01:56:50.694774   15977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:56:50.766633   15977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 01:56:50.892540   15977 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 01:56:50.892614   15977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 01:56:50.896082   15977 start.go:564] Will wait 60s for crictl version
	I1119 01:56:50.896130   15977 ssh_runner.go:195] Run: which crictl
	I1119 01:56:50.899382   15977 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 01:56:50.922696   15977 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 01:56:50.922798   15977 ssh_runner.go:195] Run: crio --version
	I1119 01:56:50.948424   15977 ssh_runner.go:195] Run: crio --version
	I1119 01:56:50.975450   15977 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 01:56:50.976599   15977 cli_runner.go:164] Run: docker network inspect addons-167289 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 01:56:50.992869   15977 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 01:56:50.996515   15977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
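The one-liner above is an idempotent /etc/hosts update: grep -v $'\thost.minikube.internal$' strips any stale tab-separated entry, the echo appends the fresh mapping, and the temp file is copied back over /etc/hosts. Afterwards the file contains exactly one line of the form:
	192.168.49.1	host.minikube.internal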
	I1119 01:56:51.005831   15977 kubeadm.go:884] updating cluster {Name:addons-167289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 01:56:51.005948   15977 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:56:51.006003   15977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:56:51.033312   15977 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:56:51.033329   15977 crio.go:433] Images already preloaded, skipping extraction
	I1119 01:56:51.033366   15977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:56:51.056752   15977 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:56:51.056785   15977 cache_images.go:86] Images are preloaded, skipping loading
	I1119 01:56:51.056794   15977 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1119 01:56:51.056900   15977 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-167289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
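In the kubelet drop-in above, the empty ExecStart= is deliberate: systemd treats a bare ExecStart= as "clear the inherited command", so the ExecStart= line that follows replaces the base unit's invocation rather than adding a second one. The merged unit can be inspected on the node with systemctl cat kubelet.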
	I1119 01:56:51.056977   15977 ssh_runner.go:195] Run: crio config
	I1119 01:56:51.098229   15977 cni.go:84] Creating CNI manager for ""
	I1119 01:56:51.098252   15977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:56:51.098270   15977 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 01:56:51.098297   15977 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-167289 NodeName:addons-167289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 01:56:51.098451   15977 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-167289"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 01:56:51.098516   15977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 01:56:51.105858   15977 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 01:56:51.105913   15977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 01:56:51.112704   15977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1119 01:56:51.123894   15977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 01:56:51.137256   15977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
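Once the rendered multi-document config lands as kubeadm.yaml.new, it can be sanity-checked by hand on the node; a sketch, assuming the bundled kubeadm (v1.34 here) supports the validate subcommand it has shipped since v1.26:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new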
	I1119 01:56:51.148104   15977 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 01:56:51.151240   15977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 01:56:51.159854   15977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:56:51.232683   15977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:56:51.253355   15977 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289 for IP: 192.168.49.2
	I1119 01:56:51.253377   15977 certs.go:195] generating shared ca certs ...
	I1119 01:56:51.253395   15977 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.253539   15977 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 01:56:51.387457   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt ...
	I1119 01:56:51.387485   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt: {Name:mk7624723baa4df6f75e33083adc8e75b09c347a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.387637   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key ...
	I1119 01:56:51.387648   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key: {Name:mk47a85d97d3efb8b54a9dd78a07e03f896e8596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.387713   15977 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 01:56:51.550105   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt ...
	I1119 01:56:51.550129   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt: {Name:mk03a052d228a2f9c94e95bc8cad8b9967faf6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.550272   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key ...
	I1119 01:56:51.550283   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key: {Name:mk19bf33386de4407e06afcb75512ea7f42aac60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.550348   15977 certs.go:257] generating profile certs ...
	I1119 01:56:51.550404   15977 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.key
	I1119 01:56:51.550417   15977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt with IP's: []
	I1119 01:56:51.695176   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt ...
	I1119 01:56:51.695208   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: {Name:mk6f9d68eb87ffbb51f2d7fcd64ccd78e64e75f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.695342   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.key ...
	I1119 01:56:51.695352   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.key: {Name:mk90e076134c3c0597612cf93c59d3b0dee365e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.695416   15977 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key.92bd18a6
	I1119 01:56:51.695445   15977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt.92bd18a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1119 01:56:51.984762   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt.92bd18a6 ...
	I1119 01:56:51.984793   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt.92bd18a6: {Name:mk9306c98cc8a4b6a63c4265bf787d8931ce2151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.984974   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key.92bd18a6 ...
	I1119 01:56:51.984991   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key.92bd18a6: {Name:mke9067f3c97ee52a1131efd3fdbc03a2bba0c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:51.985087   15977 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt.92bd18a6 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt
	I1119 01:56:51.985189   15977 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key.92bd18a6 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key
	I1119 01:56:51.985263   15977 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.key
	I1119 01:56:51.985288   15977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.crt with IP's: []
	I1119 01:56:52.024885   15977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.crt ...
	I1119 01:56:52.024905   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.crt: {Name:mk4d1f77c92fd7452dd9b21a161766302088c130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:52.025023   15977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.key ...
	I1119 01:56:52.025037   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.key: {Name:mk6d3dac815ead4929147d5e91be6528764b981d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:52.025218   15977 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 01:56:52.025261   15977 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 01:56:52.025293   15977 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 01:56:52.025326   15977 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 01:56:52.025860   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 01:56:52.042924   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 01:56:52.059753   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 01:56:52.076910   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 01:56:52.092670   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 01:56:52.108952   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 01:56:52.124768   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 01:56:52.140225   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 01:56:52.155506   15977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 01:56:52.172564   15977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 01:56:52.183696   15977 ssh_runner.go:195] Run: openssl version
	I1119 01:56:52.189175   15977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 01:56:52.198834   15977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:52.202092   15977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:52.202127   15977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:52.235336   15977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
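The two openssl steps above implement OpenSSL's CApath convention: x509 -hash prints the subject-name hash (b5213941 here), and the <hash>.0 symlink is what lets TLS clients resolve the minikube CA from /etc/ssl/certs. The result is verifiable by hand:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # -> b5213941
	ls -l /etc/ssl/certs/b5213941.0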
	I1119 01:56:52.243361   15977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 01:56:52.246608   15977 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 01:56:52.246658   15977 kubeadm.go:401] StartCluster: {Name:addons-167289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-167289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:52.246733   15977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:56:52.246774   15977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:56:52.270758   15977 cri.go:89] found id: ""
	I1119 01:56:52.270809   15977 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 01:56:52.277954   15977 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 01:56:52.284942   15977 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 01:56:52.284981   15977 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 01:56:52.291670   15977 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 01:56:52.291700   15977 kubeadm.go:158] found existing configuration files:
	
	I1119 01:56:52.291737   15977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 01:56:52.298752   15977 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 01:56:52.298801   15977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 01:56:52.305205   15977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 01:56:52.311800   15977 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 01:56:52.311835   15977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 01:56:52.318060   15977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 01:56:52.324767   15977 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 01:56:52.324811   15977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 01:56:52.331194   15977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 01:56:52.337787   15977 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 01:56:52.337818   15977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 01:56:52.344295   15977 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 01:56:52.377648   15977 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 01:56:52.377707   15977 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 01:56:52.396146   15977 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 01:56:52.396223   15977 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 01:56:52.396255   15977 kubeadm.go:319] OS: Linux
	I1119 01:56:52.396308   15977 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 01:56:52.396362   15977 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 01:56:52.396414   15977 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 01:56:52.396510   15977 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 01:56:52.396558   15977 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 01:56:52.396658   15977 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 01:56:52.396746   15977 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 01:56:52.396820   15977 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 01:56:52.447127   15977 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 01:56:52.447253   15977 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 01:56:52.447391   15977 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 01:56:52.453951   15977 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 01:56:52.455977   15977 out.go:252]   - Generating certificates and keys ...
	I1119 01:56:52.456078   15977 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 01:56:52.456179   15977 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 01:56:52.616978   15977 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 01:56:52.691989   15977 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 01:56:52.877079   15977 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 01:56:53.267797   15977 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 01:56:53.532944   15977 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 01:56:53.533123   15977 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-167289 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 01:56:53.714275   15977 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 01:56:53.714473   15977 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-167289 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 01:56:53.910145   15977 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 01:56:54.092452   15977 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 01:56:54.505627   15977 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 01:56:54.505697   15977 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 01:56:54.705083   15977 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 01:56:54.827841   15977 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 01:56:55.203616   15977 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 01:56:56.065668   15977 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 01:56:56.299017   15977 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 01:56:56.299457   15977 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 01:56:56.302974   15977 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 01:56:56.305843   15977 out.go:252]   - Booting up control plane ...
	I1119 01:56:56.305956   15977 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 01:56:56.306070   15977 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 01:56:56.306185   15977 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 01:56:56.317461   15977 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 01:56:56.317582   15977 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 01:56:56.324424   15977 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 01:56:56.324924   15977 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 01:56:56.324992   15977 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 01:56:56.411922   15977 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 01:56:56.412038   15977 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 01:56:56.913498   15977 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.569186ms
	I1119 01:56:56.916327   15977 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 01:56:56.916412   15977 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1119 01:56:56.916545   15977 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 01:56:56.916631   15977 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 01:56:58.253703   15977 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.337270629s
	I1119 01:56:59.109663   15977 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.193280797s
	I1119 01:57:00.917456   15977 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001043439s
	I1119 01:57:00.928067   15977 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 01:57:00.936953   15977 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 01:57:00.944485   15977 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 01:57:00.944727   15977 kubeadm.go:319] [mark-control-plane] Marking the node addons-167289 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 01:57:00.951995   15977 kubeadm.go:319] [bootstrap-token] Using token: 0jiben.vo8mj3kr3cd8jvp6
	I1119 01:57:00.953214   15977 out.go:252]   - Configuring RBAC rules ...
	I1119 01:57:00.953349   15977 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 01:57:00.955996   15977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 01:57:00.960359   15977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 01:57:00.963400   15977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 01:57:00.965302   15977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 01:57:00.967389   15977 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 01:57:01.322588   15977 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 01:57:01.735061   15977 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 01:57:02.322282   15977 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 01:57:02.322984   15977 kubeadm.go:319] 
	I1119 01:57:02.323043   15977 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 01:57:02.323070   15977 kubeadm.go:319] 
	I1119 01:57:02.323166   15977 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 01:57:02.323176   15977 kubeadm.go:319] 
	I1119 01:57:02.323205   15977 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 01:57:02.323273   15977 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 01:57:02.323346   15977 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 01:57:02.323362   15977 kubeadm.go:319] 
	I1119 01:57:02.323480   15977 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 01:57:02.323491   15977 kubeadm.go:319] 
	I1119 01:57:02.323563   15977 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 01:57:02.323570   15977 kubeadm.go:319] 
	I1119 01:57:02.323649   15977 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 01:57:02.323753   15977 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 01:57:02.323863   15977 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 01:57:02.323878   15977 kubeadm.go:319] 
	I1119 01:57:02.324001   15977 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 01:57:02.324121   15977 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 01:57:02.324137   15977 kubeadm.go:319] 
	I1119 01:57:02.324246   15977 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0jiben.vo8mj3kr3cd8jvp6 \
	I1119 01:57:02.324401   15977 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 01:57:02.324457   15977 kubeadm.go:319] 	--control-plane 
	I1119 01:57:02.324472   15977 kubeadm.go:319] 
	I1119 01:57:02.324586   15977 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 01:57:02.324595   15977 kubeadm.go:319] 
	I1119 01:57:02.324716   15977 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0jiben.vo8mj3kr3cd8jvp6 \
	I1119 01:57:02.324881   15977 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 01:57:02.326276   15977 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 01:57:02.326445   15977 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
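Both preflight warnings are benign in this environment: the kicbase container exposes no kernel config under /lib/modules, which is why SystemVerification is already passed to --ignore-preflight-errors for the docker driver (logged at 01:56:52.284), and kubelet is started directly by minikube (01:56:51.232) rather than enabled through systemctl enable.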
	I1119 01:57:02.326482   15977 cni.go:84] Creating CNI manager for ""
	I1119 01:57:02.326498   15977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:57:02.328327   15977 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 01:57:02.329374   15977 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 01:57:02.333275   15977 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 01:57:02.333289   15977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 01:57:02.345721   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
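With the kindnet manifest applied, the rollout can be checked from the host; a sketch (the app=kindnet label is assumed from minikube's kindnet manifest):
	kubectl -n kube-system get pods -l app=kindnet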
	I1119 01:57:02.536658   15977 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 01:57:02.536732   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:02.536765   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-167289 minikube.k8s.io/updated_at=2025_11_19T01_57_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=addons-167289 minikube.k8s.io/primary=true
	I1119 01:57:02.618814   15977 ops.go:34] apiserver oom_adj: -16
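The -16 read back here is the legacy /proc oom_adj view of the kubelet's protection for critical static pods, which marks kube-apiserver as nearly exempt from the OOM killer; the modern interface exposes the underlying value (typically -997 on the -1000..1000 scale):
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj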
	I1119 01:57:02.618856   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:03.119948   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:03.619280   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:04.119229   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:04.619579   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:05.119283   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:05.619695   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:06.119494   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:06.619864   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:07.119524   15977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:07.181709   15977 kubeadm.go:1114] duration metric: took 4.645035612s to wait for elevateKubeSystemPrivileges
	I1119 01:57:07.181743   15977 kubeadm.go:403] duration metric: took 14.935090313s to StartCluster
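The repeated "kubectl get sa default" calls above are minikube polling at roughly 500ms intervals for the default service account to exist; together with the minikube-rbac cluster-admin binding created at 01:57:02.536, that wait is what the 4.645s elevateKubeSystemPrivileges metric measures.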
	I1119 01:57:07.181762   15977 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:07.181875   15977 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 01:57:07.182220   15977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:07.182390   15977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 01:57:07.182415   15977 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:57:07.182499   15977 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 01:57:07.182634   15977 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:57:07.182651   15977 addons.go:70] Setting cloud-spanner=true in profile "addons-167289"
	I1119 01:57:07.182658   15977 addons.go:70] Setting ingress-dns=true in profile "addons-167289"
	I1119 01:57:07.182660   15977 addons.go:70] Setting volumesnapshots=true in profile "addons-167289"
	I1119 01:57:07.182675   15977 addons.go:239] Setting addon cloud-spanner=true in "addons-167289"
	I1119 01:57:07.182634   15977 addons.go:70] Setting yakd=true in profile "addons-167289"
	I1119 01:57:07.182683   15977 addons.go:70] Setting inspektor-gadget=true in profile "addons-167289"
	I1119 01:57:07.182706   15977 addons.go:70] Setting metrics-server=true in profile "addons-167289"
	I1119 01:57:07.182716   15977 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-167289"
	I1119 01:57:07.182720   15977 addons.go:70] Setting storage-provisioner=true in profile "addons-167289"
	I1119 01:57:07.182694   15977 addons.go:239] Setting addon yakd=true in "addons-167289"
	I1119 01:57:07.182741   15977 addons.go:239] Setting addon storage-provisioner=true in "addons-167289"
	I1119 01:57:07.182747   15977 addons.go:70] Setting gcp-auth=true in profile "addons-167289"
	I1119 01:57:07.182725   15977 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-167289"
	I1119 01:57:07.182764   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182765   15977 mustload.go:66] Loading cluster: addons-167289
	I1119 01:57:07.182772   15977 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-167289"
	I1119 01:57:07.182798   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182809   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182709   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182653   15977 addons.go:70] Setting volcano=true in profile "addons-167289"
	I1119 01:57:07.182934   15977 addons.go:239] Setting addon volcano=true in "addons-167289"
	I1119 01:57:07.182948   15977 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:57:07.182963   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.183181   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183284   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183311   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183343   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183385   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.182724   15977 addons.go:239] Setting addon metrics-server=true in "addons-167289"
	I1119 01:57:07.183417   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182633   15977 addons.go:70] Setting ingress=true in profile "addons-167289"
	I1119 01:57:07.183606   15977 addons.go:239] Setting addon ingress=true in "addons-167289"
	I1119 01:57:07.183647   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.183935   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.182650   15977 addons.go:70] Setting registry=true in profile "addons-167289"
	I1119 01:57:07.183979   15977 addons.go:239] Setting addon registry=true in "addons-167289"
	I1119 01:57:07.184006   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.184094   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.184451   15977 out.go:179] * Verifying Kubernetes components...
	I1119 01:57:07.184923   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.182676   15977 addons.go:239] Setting addon ingress-dns=true in "addons-167289"
	I1119 01:57:07.185472   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182680   15977 addons.go:239] Setting addon volumesnapshots=true in "addons-167289"
	I1119 01:57:07.185720   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.185932   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.186200   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.183313   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.182728   15977 addons.go:239] Setting addon inspektor-gadget=true in "addons-167289"
	I1119 01:57:07.187558   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182748   15977 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-167289"
	I1119 01:57:07.191791   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.182645   15977 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-167289"
	I1119 01:57:07.182645   15977 addons.go:70] Setting registry-creds=true in profile "addons-167289"
	I1119 01:57:07.182696   15977 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-167289"
	I1119 01:57:07.182757   15977 addons.go:70] Setting default-storageclass=true in profile "addons-167289"
	I1119 01:57:07.191879   15977 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-167289"
	I1119 01:57:07.191934   15977 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-167289"
	I1119 01:57:07.191953   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.192443   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.192471   15977 addons.go:239] Setting addon registry-creds=true in "addons-167289"
	I1119 01:57:07.192501   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.192968   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.193293   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.193688   15977 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-167289"
	I1119 01:57:07.194018   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.194104   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.194304   15977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:57:07.197366   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.232175   15977 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 01:57:07.233775   15977 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 01:57:07.233794   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 01:57:07.233853   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.241465   15977 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 01:57:07.252652   15977 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 01:57:07.254964   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 01:57:07.255007   15977 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 01:57:07.255076   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.261562   15977 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:57:07.261644   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 01:57:07.262685   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	W1119 01:57:07.267538   15977 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 01:57:07.270512   15977 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 01:57:07.271544   15977 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 01:57:07.272974   15977 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 01:57:07.273033   15977 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:07.271835   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.274124   15977 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 01:57:07.274144   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 01:57:07.274203   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.275338   15977 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 01:57:07.276517   15977 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:57:07.276535   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 01:57:07.276638   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.276845   15977 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:07.278030   15977 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:57:07.278092   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 01:57:07.278168   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.283628   15977 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 01:57:07.283706   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 01:57:07.284156   15977 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 01:57:07.285284   15977 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:57:07.285306   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 01:57:07.285353   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.286681   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 01:57:07.286700   15977 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:57:07.286701   15977 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 01:57:07.286712   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 01:57:07.286758   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.286782   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
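	(The interleaved cli_runner/ssh_runner lines above show the staging pattern: each addon manifest is scp'd from memory to /etc/kubernetes/addons/ over SSH, and every transfer first resolves which host port Docker mapped to the node container's 22/tcp. A minimal sketch of that port lookup, using the same Go template the log records — a hypothetical helper, not minikube's own code:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // sshHostPort asks Docker which host port is mapped to the
	    // container's 22/tcp, using the same template as the log above.
	    func sshHostPort(container string) (string, error) {
	    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	    	if err != nil {
	    		return "", err
	    	}
	    	return strings.TrimSpace(string(out)), nil
	    }

	    func main() {
	    	port, err := sshHostPort("addons-167289")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("ssh -p", port, "docker@127.0.0.1")
	    }

	The sshutil lines that follow are the SSH clients opened against exactly that resolved 127.0.0.1 port.)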
	I1119 01:57:07.299378   15977 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-167289"
	I1119 01:57:07.299570   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.300867   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.307562   15977 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 01:57:07.308025   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.310876   15977 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 01:57:07.310899   15977 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 01:57:07.310950   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.318483   15977 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 01:57:07.318637   15977 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 01:57:07.320241   15977 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:57:07.320260   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 01:57:07.320318   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.321269   15977 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:57:07.321289   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 01:57:07.321337   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.326578   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.334114   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.335037   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.335136   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 01:57:07.336384   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 01:57:07.337621   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 01:57:07.338864   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 01:57:07.339959   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 01:57:07.340253   15977 addons.go:239] Setting addon default-storageclass=true in "addons-167289"
	I1119 01:57:07.340361   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:07.341406   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:07.342123   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1119 01:57:07.345704   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 01:57:07.346996   15977 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 01:57:07.347582   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.348025   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 01:57:07.348049   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 01:57:07.348102   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.355282   15977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
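	(The sed pipeline above patches the coredns ConfigMap in place: it inserts a log directive before the errors line and a hosts block before the forward . /etc/resolv.conf line, so pods can resolve host.minikube.internal. Reconstructed from the sed expressions themselves, the relevant Corefile region ends up as — with "..." standing for the directives the command leaves untouched:

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

	The "host record injected into CoreDNS's ConfigMap" line further down confirms the replace succeeded.)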
	I1119 01:57:07.364095   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.367803   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.372263   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.389094   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.393662   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.407349   15977 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 01:57:07.410139   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.410792   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.411871   15977 out.go:179]   - Using image docker.io/busybox:stable
	I1119 01:57:07.411994   15977 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 01:57:07.412011   15977 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 01:57:07.412058   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.412996   15977 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:57:07.413036   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 01:57:07.413082   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:07.414011   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	W1119 01:57:07.415689   15977 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1119 01:57:07.415750   15977 retry.go:31] will retry after 330.560053ms: ssh: handshake failed: EOF
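	(The handshake failure above is absorbed by retry.go with a randomized wait — "will retry after 330.560053ms". A generic sketch of that pattern, assuming only what the log shows (jittered, growing waits between attempts; not minikube's retry.go):

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // retry runs op up to attempts times, sleeping a jittered,
	    // roughly doubling interval between failures.
	    func retry(attempts int, base time.Duration, op func() error) error {
	    	var err error
	    	for i := 0; i < attempts; i++ {
	    		if err = op(); err == nil {
	    			return nil
	    		}
	    		wait := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
	    		fmt.Printf("will retry after %v: %v\n", wait, err)
	    		time.Sleep(wait)
	    	}
	    	return err
	    }

	    func main() {
	    	i := 0
	    	_ = retry(4, 200*time.Millisecond, func() error {
	    		if i++; i < 3 {
	    			return errors.New("ssh: handshake failed: EOF")
	    		}
	    		return nil
	    	})
	    })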
	I1119 01:57:07.419615   15977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:57:07.443361   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.453636   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:07.500323   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 01:57:07.501005   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:57:07.509680   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 01:57:07.509705   15977 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 01:57:07.509885   15977 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 01:57:07.509899   15977 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 01:57:07.521120   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:57:07.525648   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:57:07.526733   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:57:07.530958   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:57:07.538081   15977 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:57:07.538098   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 01:57:07.543467   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 01:57:07.543527   15977 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 01:57:07.552704   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:57:07.570847   15977 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 01:57:07.570873   15977 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 01:57:07.574582   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 01:57:07.574602   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 01:57:07.577366   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:57:07.591333   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 01:57:07.591412   15977 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 01:57:07.594959   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:57:07.603495   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:57:07.603837   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 01:57:07.613237   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 01:57:07.613259   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 01:57:07.619850   15977 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 01:57:07.619874   15977 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 01:57:07.645512   15977 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:57:07.645554   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 01:57:07.656396   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 01:57:07.656424   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 01:57:07.668559   15977 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 01:57:07.668588   15977 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 01:57:07.702657   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:57:07.710399   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 01:57:07.710439   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 01:57:07.719333   15977 node_ready.go:35] waiting up to 6m0s for node "addons-167289" to be "Ready" ...
	I1119 01:57:07.719779   15977 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1119 01:57:07.727571   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 01:57:07.727599   15977 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 01:57:07.754760   15977 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 01:57:07.754808   15977 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 01:57:07.795532   15977 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:57:07.795559   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 01:57:07.812210   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 01:57:07.812234   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 01:57:07.869769   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 01:57:07.869803   15977 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 01:57:07.873364   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:57:07.921856   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 01:57:07.921884   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 01:57:07.985659   15977 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 01:57:07.985684   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 01:57:07.993671   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 01:57:07.993761   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 01:57:08.035890   15977 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 01:57:08.036001   15977 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 01:57:08.049831   15977 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 01:57:08.049856   15977 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 01:57:08.094498   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 01:57:08.095977   15977 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:57:08.096045   15977 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 01:57:08.141880   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:57:08.231633   15977 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-167289" context rescaled to 1 replicas
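	(Rescaling coredns to a single replica can be done through the deployment's scale subresource; a hedged client-go sketch of that mechanism — an assumption, not minikube's kapi.go:

	    package main

	    import (
	    	"context"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)
	    	ctx := context.Background()

	    	// Read the current scale, then write it back with one replica.
	    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	scale.Spec.Replicas = 1
	    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
	    		panic(err)
	    	}
	    })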
	I1119 01:57:08.657396   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.156347683s)
	I1119 01:57:08.657455   15977 addons.go:480] Verifying addon ingress=true in "addons-167289"
	I1119 01:57:08.657495   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.13634593s)
	I1119 01:57:08.657572   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.131892919s)
	I1119 01:57:08.657661   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130902418s)
	I1119 01:57:08.657730   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.126734908s)
	I1119 01:57:08.657791   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.105064107s)
	I1119 01:57:08.657830   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.080441497s)
	I1119 01:57:08.657845   15977 addons.go:480] Verifying addon registry=true in "addons-167289"
	I1119 01:57:08.657928   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.062944454s)
	I1119 01:57:08.658010   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.054450482s)
	I1119 01:57:08.658038   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.054121439s)
	I1119 01:57:08.660054   15977 out.go:179] * Verifying registry addon...
	I1119 01:57:08.660056   15977 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-167289 service yakd-dashboard -n yakd-dashboard
	
	I1119 01:57:08.660063   15977 out.go:179] * Verifying ingress addon...
	I1119 01:57:08.662279   15977 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 01:57:08.662722   15977 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 01:57:08.664957   15977 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:57:08.664977   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:08.665071   15977 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 01:57:08.665091   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:08.665385   15977 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
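	(The "object has been modified" failure above is the API server's optimistic-concurrency conflict: the StorageClass changed between the addon's read and its write. The standard fix is to re-read and re-apply inside a conflict retry loop; a sketch with client-go's retry helper — an assumed approach, not the addon callback's actual code:

	    package main

	    import (
	    	"context"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    	"k8s.io/client-go/util/retry"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)

	    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
	    		// Fresh Get on every attempt so the resourceVersion is current.
	    		sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "local-path", metav1.GetOptions{})
	    		if err != nil {
	    			return err
	    		}
	    		if sc.Annotations == nil {
	    			sc.Annotations = map[string]string{}
	    		}
	    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	    		_, err = cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
	    		return err // a Conflict here triggers another Get+Update round
	    	})
	    	if err != nil {
	    		panic(err)
	    	}
	    })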
	I1119 01:57:09.104203   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.230792489s)
	W1119 01:57:09.104258   15977 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 01:57:09.104279   15977 retry.go:31] will retry after 185.166277ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
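	(The error is a CRD registration race: the VolumeSnapshotClass is applied in the same kubectl invocation that creates its CRD, before the new API is being served — hence "ensure CRDs are installed first", and the successful --force re-apply a few lines below. One way to avoid the race, sketched here as an assumption rather than minikube's fix, is to wait for the CRD's Established condition before creating instances:

	    package main

	    import (
	    	"context"
	    	"time"

	    	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	    	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // waitEstablished polls until the named CRD reports Established=True.
	    func waitEstablished(cs *apiextclient.Clientset, name string) error {
	    	return wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
	    		func(ctx context.Context) (bool, error) {
	    			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // not registered yet, keep polling
	    			}
	    			for _, c := range crd.Status.Conditions {
	    				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
	    					return true, nil
	    				}
	    			}
	    			return false, nil
	    		})
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := apiextclient.NewForConfigOrDie(cfg)
	    	if err := waitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
	    		panic(err)
	    	}
	    })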
	I1119 01:57:09.104364   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.009816355s)
	I1119 01:57:09.104403   15977 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-167289"
	I1119 01:57:09.104420   15977 addons.go:480] Verifying addon metrics-server=true in "addons-167289"
	I1119 01:57:09.106028   15977 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 01:57:09.107988   15977 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 01:57:09.110371   15977 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:57:09.110390   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:09.210416   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:09.210470   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:09.290139   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:57:09.611278   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:09.665403   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:09.665508   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:09.721797   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:10.110359   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:10.165602   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:10.165610   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:10.610748   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:10.664960   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:10.665052   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:11.110734   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:11.164693   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:11.164854   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:11.610879   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:11.665026   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:11.665219   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:11.722295   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:11.728071   15977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.43789244s)
	I1119 01:57:12.111092   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:12.164933   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:12.165143   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:12.611351   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:12.664411   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:12.665333   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:13.110919   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:13.164823   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:13.164993   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:13.610846   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:13.664813   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:13.664878   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:14.111267   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:14.165222   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:14.165366   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:14.222196   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:14.611198   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:14.665344   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:14.665347   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:14.887830   15977 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 01:57:14.887895   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:14.906331   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:15.010676   15977 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 01:57:15.022034   15977 addons.go:239] Setting addon gcp-auth=true in "addons-167289"
	I1119 01:57:15.022086   15977 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:57:15.022408   15977 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:57:15.038501   15977 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 01:57:15.038549   15977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:57:15.054562   15977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:57:15.110492   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:15.145546   15977 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:15.146670   15977 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 01:57:15.147677   15977 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 01:57:15.147691   15977 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 01:57:15.159311   15977 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 01:57:15.159328   15977 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 01:57:15.165647   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:15.165710   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:15.171019   15977 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:57:15.171036   15977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 01:57:15.182458   15977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:57:15.457426   15977 addons.go:480] Verifying addon gcp-auth=true in "addons-167289"
	I1119 01:57:15.458906   15977 out.go:179] * Verifying gcp-auth addon...
	I1119 01:57:15.460827   15977 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 01:57:15.462859   15977 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 01:57:15.462885   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
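	(Each kapi.go:96 line in this stretch is one poll of the pods behind a label selector until they leave Pending. A sketch of that loop's likely shape — assumed, not minikube's kapi.go:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // waitPodsRunning polls the selector until every matching pod is Running.
	    func waitPodsRunning(cs *kubernetes.Clientset, ns, selector string) error {
	    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
	    		func(ctx context.Context) (bool, error) {
	    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	    			if err != nil || len(pods.Items) == 0 {
	    				return false, nil
	    			}
	    			for _, p := range pods.Items {
	    				if p.Status.Phase != corev1.PodRunning {
	    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
	    					return false, nil
	    				}
	    			}
	    			return true, nil
	    		})
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)
	    	if err := waitPodsRunning(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
	    		panic(err)
	    	}
	    })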
	I1119 01:57:15.611161   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:15.665208   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:15.665350   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:15.963587   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:16.111124   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:16.165225   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:16.165384   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:16.222330   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:16.464324   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:16.610867   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:16.664758   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:16.664955   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:16.963644   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:17.111054   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:17.165165   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:17.165349   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:17.463694   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:17.610947   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:17.665055   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:17.665095   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:17.963360   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:18.110706   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:18.164733   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:18.164818   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:18.463108   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:18.610425   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:18.664413   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:18.665385   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:18.722291   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:18.963725   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:19.110985   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:19.165079   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:19.165326   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:19.463461   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:19.610614   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:19.664625   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:19.664781   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:19.964046   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:20.110529   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:20.164574   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:20.164812   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:20.464265   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:20.610736   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:20.664879   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:20.665024   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:20.963180   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:21.110498   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:21.164530   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:21.165460   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:21.222481   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:21.463770   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:21.611049   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:21.665168   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:21.665392   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:21.963612   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:22.110926   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:22.165080   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:22.165262   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:22.463616   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:22.610941   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:22.664912   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:22.665061   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:22.963399   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:23.110685   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:23.164879   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:23.164954   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:23.463217   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:23.610313   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:23.665517   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:23.665662   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:23.721351   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:23.963724   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:24.110976   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:24.165122   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:24.165309   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:24.463740   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:24.610740   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:24.664739   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:24.664914   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:24.963768   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:25.111124   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:25.165112   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:25.165251   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:25.463659   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:25.611091   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:25.665016   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:25.665231   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:25.722202   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:25.963579   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:26.110810   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:26.164881   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:26.165029   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:26.463704   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:26.611322   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:26.665235   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:26.665508   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:26.963690   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:27.111163   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:27.165169   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:27.165235   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:27.463805   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:27.610990   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:27.665068   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:27.665194   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:27.963518   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:28.111047   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:28.165262   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:28.165450   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:28.222566   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:28.463931   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:28.611316   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:28.664557   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:28.665420   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:28.963830   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:29.111142   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:29.165265   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:29.165491   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:29.464117   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:29.610233   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:29.665350   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:29.665364   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:29.963752   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:30.110900   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:30.165107   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:30.165160   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:30.463817   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:30.610943   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:30.665015   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:30.665179   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:30.722132   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:30.963485   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:31.110672   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:31.164676   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:31.164896   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:31.463212   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:31.610250   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:31.665450   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:31.665608   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:31.963675   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:32.111031   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:32.165292   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:32.165516   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:32.463709   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:32.611028   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:32.665061   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:32.665221   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:32.963493   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:33.110769   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:33.164908   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:33.165070   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:33.221967   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:33.463228   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:33.610336   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:33.664413   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:33.665316   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:33.963737   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:34.111114   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:34.165498   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:34.165614   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:34.463965   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:34.611197   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:34.665253   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:34.665445   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:34.963678   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:35.110689   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:35.164678   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:35.164878   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:35.464014   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:35.610171   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:35.665385   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:35.665550   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:35.721231   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:35.963484   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:36.110698   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:36.164532   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:36.164639   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:36.463752   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:36.610887   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:36.664803   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:36.664997   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:36.963018   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:37.110283   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:37.165310   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:37.165455   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:37.463959   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:37.611063   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:37.665051   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:37.665156   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:37.721910   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:37.963104   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:38.110475   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:38.164653   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:38.165388   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:38.464073   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:38.610261   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:38.665258   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:38.665311   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:38.963516   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:39.110739   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:39.164756   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:39.164931   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:39.463094   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:39.610317   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:39.665210   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:39.665367   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:39.722147   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:39.963626   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:40.110747   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:40.164890   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:40.165061   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:40.463406   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:40.610609   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:40.664582   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:40.665561   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:40.963365   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:41.110800   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:41.164994   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:41.165106   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:41.463491   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:41.610630   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:41.665043   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:41.665218   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:41.722254   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:41.963656   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:42.110908   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:42.165029   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:42.165162   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:42.463778   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:42.610823   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:42.664757   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:42.664991   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:42.963136   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:43.110265   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:43.165221   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:43.165371   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:43.463702   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:43.610741   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:43.665025   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:43.665035   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:43.963237   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:44.110762   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:44.164899   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:44.164994   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:44.222019   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:44.463627   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:44.610908   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:44.664888   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:44.664957   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:44.963227   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:45.110342   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:45.165483   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:45.165543   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:45.464086   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:45.611173   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:45.665147   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:45.665279   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:45.963732   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:46.111054   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:46.165101   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:46.165279   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:46.222315   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:46.463828   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:46.611142   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:46.665347   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:46.665485   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:46.963534   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:47.110867   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:47.164738   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:47.164923   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:47.463288   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:47.610267   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:47.665198   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:47.665345   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:47.963946   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:48.111217   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:48.165306   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:48.165399   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:48.222361   15977 node_ready.go:57] node "addons-167289" has "Ready":"False" status (will retry)
	I1119 01:57:48.463720   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:48.610840   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:48.664834   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:48.664957   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:48.964543   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:49.110795   15977 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:57:49.110815   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:49.168730   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:49.168973   15977 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:57:49.168988   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
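The kapi.go:96/kapi.go:86 lines above are minikube's label-selector poll: list the pods matching a selector, log any that are still Pending, sleep briefly, and repeat. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default path (waitForPodsRunning and the 500ms interval are illustrative, not minikube's actual helper):

// Sketch: poll pods matching a label selector until all are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		// No pods found yet also counts as "not ready", matching the Pending logs above.
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}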
	I1119 01:57:49.222737   15977 node_ready.go:49] node "addons-167289" is "Ready"
	I1119 01:57:49.222773   15977 node_ready.go:38] duration metric: took 41.503395794s for node "addons-167289" to be "Ready" ...
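node_ready.go flips from the "will retry" warnings to Ready once the node's Ready condition reports True. A hedged sketch of that check against the same node name (isNodeReady is a hypothetical helper, not minikube's code):

// Sketch: fetch a node and report whether its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady scans the status conditions for NodeReady == True.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-167289", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q Ready: %v\n", node.Name, isNodeReady(node))
}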
	I1119 01:57:49.222790   15977 api_server.go:52] waiting for apiserver process to appear ...
	I1119 01:57:49.222844   15977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 01:57:49.238948   15977 api_server.go:72] duration metric: took 42.056477802s to wait for apiserver process to appear ...
	I1119 01:57:49.239024   15977 api_server.go:88] waiting for apiserver healthz status ...
	I1119 01:57:49.239047   15977 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1119 01:57:49.244392   15977 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1119 01:57:49.245489   15977 api_server.go:141] control plane version: v1.34.1
	I1119 01:57:49.245519   15977 api_server.go:131] duration metric: took 6.485232ms to wait for apiserver health ...
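The healthz wait above is a plain HTTPS GET against the apiserver endpoint until it answers 200 with body "ok". A self-contained sketch of that probe (InsecureSkipVerify is for illustration only; minikube verifies against the cluster CA):

// Sketch: poll the apiserver /healthz endpoint until it returns HTTP 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(time.Second)
	}
}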
	I1119 01:57:49.245529   15977 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 01:57:49.269492   15977 system_pods.go:59] 20 kube-system pods found
	I1119 01:57:49.269534   15977 system_pods.go:61] "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:49.269555   15977 system_pods.go:61] "coredns-66bc5c9577-xb5hd" [83885124-6b05-4e64-8764-56f1ebccbc5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:49.269567   15977 system_pods.go:61] "csi-hostpath-attacher-0" [86eb0765-79ab-4572-b5a8-a93869132f95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:57:49.269575   15977 system_pods.go:61] "csi-hostpath-resizer-0" [0ce8e6a4-9cb5-48fd-b476-4e5e2f1d1fef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:57:49.269588   15977 system_pods.go:61] "csi-hostpathplugin-m4svl" [eec068d1-82e3-48ab-9138-f9a4fb6e0ec0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:57:49.269595   15977 system_pods.go:61] "etcd-addons-167289" [2416d7e9-627f-4e54-a742-4e6fa6d16027] Running
	I1119 01:57:49.269602   15977 system_pods.go:61] "kindnet-cf2ws" [dece1234-512d-45b8-84de-da6d63aca86d] Running
	I1119 01:57:49.269616   15977 system_pods.go:61] "kube-apiserver-addons-167289" [16a98896-3b33-4612-9f81-d401f375bc30] Running
	I1119 01:57:49.269623   15977 system_pods.go:61] "kube-controller-manager-addons-167289" [441bdd3c-a38f-4cbc-b00e-f2e40476ac8d] Running
	I1119 01:57:49.269631   15977 system_pods.go:61] "kube-ingress-dns-minikube" [7dfa0119-3fc1-40f7-832e-88c59236450d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:57:49.269636   15977 system_pods.go:61] "kube-proxy-lrvxh" [1614ffa7-27d9-4b0a-a7c5-3273b69aa8f9] Running
	I1119 01:57:49.269641   15977 system_pods.go:61] "kube-scheduler-addons-167289" [9abbf69d-beb6-4a30-aaeb-a85cca56ad6c] Running
	I1119 01:57:49.269650   15977 system_pods.go:61] "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:57:49.269661   15977 system_pods.go:61] "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:49.269672   15977 system_pods.go:61] "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:57:49.269679   15977 system_pods.go:61] "registry-creds-764b6fb674-85l2k" [4dc568dd-9122-493f-a53a-1829913774ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:49.269686   15977 system_pods.go:61] "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:57:49.269693   15977 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q5bjz" [a1198c36-b279-486e-98ad-1cd9940f4663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.269708   15977 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qfskz" [5212ea77-460c-423c-a29e-832bbc94b1d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.269715   15977 system_pods.go:61] "storage-provisioner" [a7872dc1-9361-4083-b435-f50ea958395a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:57:49.269722   15977 system_pods.go:74] duration metric: took 24.185941ms to wait for pod list to return data ...
	I1119 01:57:49.269732   15977 default_sa.go:34] waiting for default service account to be created ...
	I1119 01:57:49.274170   15977 default_sa.go:45] found service account: "default"
	I1119 01:57:49.274195   15977 default_sa.go:55] duration metric: took 4.456926ms for default service account to be created ...
	I1119 01:57:49.274205   15977 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 01:57:49.373456   15977 system_pods.go:86] 20 kube-system pods found
	I1119 01:57:49.373501   15977 system_pods.go:89] "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:49.373514   15977 system_pods.go:89] "coredns-66bc5c9577-xb5hd" [83885124-6b05-4e64-8764-56f1ebccbc5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:49.373524   15977 system_pods.go:89] "csi-hostpath-attacher-0" [86eb0765-79ab-4572-b5a8-a93869132f95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:57:49.373533   15977 system_pods.go:89] "csi-hostpath-resizer-0" [0ce8e6a4-9cb5-48fd-b476-4e5e2f1d1fef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:57:49.373550   15977 system_pods.go:89] "csi-hostpathplugin-m4svl" [eec068d1-82e3-48ab-9138-f9a4fb6e0ec0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:57:49.373560   15977 system_pods.go:89] "etcd-addons-167289" [2416d7e9-627f-4e54-a742-4e6fa6d16027] Running
	I1119 01:57:49.373569   15977 system_pods.go:89] "kindnet-cf2ws" [dece1234-512d-45b8-84de-da6d63aca86d] Running
	I1119 01:57:49.373576   15977 system_pods.go:89] "kube-apiserver-addons-167289" [16a98896-3b33-4612-9f81-d401f375bc30] Running
	I1119 01:57:49.373582   15977 system_pods.go:89] "kube-controller-manager-addons-167289" [441bdd3c-a38f-4cbc-b00e-f2e40476ac8d] Running
	I1119 01:57:49.373590   15977 system_pods.go:89] "kube-ingress-dns-minikube" [7dfa0119-3fc1-40f7-832e-88c59236450d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:57:49.373596   15977 system_pods.go:89] "kube-proxy-lrvxh" [1614ffa7-27d9-4b0a-a7c5-3273b69aa8f9] Running
	I1119 01:57:49.373603   15977 system_pods.go:89] "kube-scheduler-addons-167289" [9abbf69d-beb6-4a30-aaeb-a85cca56ad6c] Running
	I1119 01:57:49.373620   15977 system_pods.go:89] "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:57:49.373632   15977 system_pods.go:89] "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:49.373641   15977 system_pods.go:89] "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:57:49.373649   15977 system_pods.go:89] "registry-creds-764b6fb674-85l2k" [4dc568dd-9122-493f-a53a-1829913774ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:49.373695   15977 system_pods.go:89] "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:57:49.373709   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q5bjz" [a1198c36-b279-486e-98ad-1cd9940f4663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.373719   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfskz" [5212ea77-460c-423c-a29e-832bbc94b1d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.373729   15977 system_pods.go:89] "storage-provisioner" [a7872dc1-9361-4083-b435-f50ea958395a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:57:49.373751   15977 retry.go:31] will retry after 262.937655ms: missing components: kube-dns
	I1119 01:57:49.467536   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:49.610924   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:49.641140   15977 system_pods.go:86] 20 kube-system pods found
	I1119 01:57:49.641182   15977 system_pods.go:89] "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:49.641193   15977 system_pods.go:89] "coredns-66bc5c9577-xb5hd" [83885124-6b05-4e64-8764-56f1ebccbc5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:49.641203   15977 system_pods.go:89] "csi-hostpath-attacher-0" [86eb0765-79ab-4572-b5a8-a93869132f95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:57:49.641211   15977 system_pods.go:89] "csi-hostpath-resizer-0" [0ce8e6a4-9cb5-48fd-b476-4e5e2f1d1fef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:57:49.641219   15977 system_pods.go:89] "csi-hostpathplugin-m4svl" [eec068d1-82e3-48ab-9138-f9a4fb6e0ec0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:57:49.641231   15977 system_pods.go:89] "etcd-addons-167289" [2416d7e9-627f-4e54-a742-4e6fa6d16027] Running
	I1119 01:57:49.641238   15977 system_pods.go:89] "kindnet-cf2ws" [dece1234-512d-45b8-84de-da6d63aca86d] Running
	I1119 01:57:49.641243   15977 system_pods.go:89] "kube-apiserver-addons-167289" [16a98896-3b33-4612-9f81-d401f375bc30] Running
	I1119 01:57:49.641248   15977 system_pods.go:89] "kube-controller-manager-addons-167289" [441bdd3c-a38f-4cbc-b00e-f2e40476ac8d] Running
	I1119 01:57:49.641258   15977 system_pods.go:89] "kube-ingress-dns-minikube" [7dfa0119-3fc1-40f7-832e-88c59236450d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:57:49.641271   15977 system_pods.go:89] "kube-proxy-lrvxh" [1614ffa7-27d9-4b0a-a7c5-3273b69aa8f9] Running
	I1119 01:57:49.641277   15977 system_pods.go:89] "kube-scheduler-addons-167289" [9abbf69d-beb6-4a30-aaeb-a85cca56ad6c] Running
	I1119 01:57:49.641284   15977 system_pods.go:89] "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:57:49.641292   15977 system_pods.go:89] "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:49.641301   15977 system_pods.go:89] "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:57:49.641309   15977 system_pods.go:89] "registry-creds-764b6fb674-85l2k" [4dc568dd-9122-493f-a53a-1829913774ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:49.641318   15977 system_pods.go:89] "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:57:49.641327   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q5bjz" [a1198c36-b279-486e-98ad-1cd9940f4663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.641339   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfskz" [5212ea77-460c-423c-a29e-832bbc94b1d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.641346   15977 system_pods.go:89] "storage-provisioner" [a7872dc1-9361-4083-b435-f50ea958395a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:57:49.641369   15977 retry.go:31] will retry after 249.569512ms: missing components: kube-dns
	I1119 01:57:49.665370   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:49.665456   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:49.895399   15977 system_pods.go:86] 20 kube-system pods found
	I1119 01:57:49.895451   15977 system_pods.go:89] "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:49.895458   15977 system_pods.go:89] "coredns-66bc5c9577-xb5hd" [83885124-6b05-4e64-8764-56f1ebccbc5b] Running
	I1119 01:57:49.895465   15977 system_pods.go:89] "csi-hostpath-attacher-0" [86eb0765-79ab-4572-b5a8-a93869132f95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:57:49.895471   15977 system_pods.go:89] "csi-hostpath-resizer-0" [0ce8e6a4-9cb5-48fd-b476-4e5e2f1d1fef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:57:49.895476   15977 system_pods.go:89] "csi-hostpathplugin-m4svl" [eec068d1-82e3-48ab-9138-f9a4fb6e0ec0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:57:49.895482   15977 system_pods.go:89] "etcd-addons-167289" [2416d7e9-627f-4e54-a742-4e6fa6d16027] Running
	I1119 01:57:49.895486   15977 system_pods.go:89] "kindnet-cf2ws" [dece1234-512d-45b8-84de-da6d63aca86d] Running
	I1119 01:57:49.895489   15977 system_pods.go:89] "kube-apiserver-addons-167289" [16a98896-3b33-4612-9f81-d401f375bc30] Running
	I1119 01:57:49.895495   15977 system_pods.go:89] "kube-controller-manager-addons-167289" [441bdd3c-a38f-4cbc-b00e-f2e40476ac8d] Running
	I1119 01:57:49.895502   15977 system_pods.go:89] "kube-ingress-dns-minikube" [7dfa0119-3fc1-40f7-832e-88c59236450d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:57:49.895505   15977 system_pods.go:89] "kube-proxy-lrvxh" [1614ffa7-27d9-4b0a-a7c5-3273b69aa8f9] Running
	I1119 01:57:49.895509   15977 system_pods.go:89] "kube-scheduler-addons-167289" [9abbf69d-beb6-4a30-aaeb-a85cca56ad6c] Running
	I1119 01:57:49.895515   15977 system_pods.go:89] "metrics-server-85b7d694d7-j62rx" [a1ece533-9783-484c-94f2-ffb5b35757a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:57:49.895523   15977 system_pods.go:89] "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:49.895539   15977 system_pods.go:89] "registry-6b586f9694-fvk8h" [c2e887e7-9fa8-44be-baf1-e7067f024b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:57:49.895546   15977 system_pods.go:89] "registry-creds-764b6fb674-85l2k" [4dc568dd-9122-493f-a53a-1829913774ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:49.895552   15977 system_pods.go:89] "registry-proxy-7s98h" [c3c89bae-0110-4220-bc88-c3dfb3496f53] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:57:49.895558   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q5bjz" [a1198c36-b279-486e-98ad-1cd9940f4663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.895567   15977 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfskz" [5212ea77-460c-423c-a29e-832bbc94b1d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:57:49.895570   15977 system_pods.go:89] "storage-provisioner" [a7872dc1-9361-4083-b435-f50ea958395a] Running
	I1119 01:57:49.895580   15977 system_pods.go:126] duration metric: took 621.368616ms to wait for k8s-apps to be running ...
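The retry.go:31 lines show the wait-for-running loop: list kube-system pods, report which required components are still missing (here kube-dns), then retry after a short randomized delay. A sketch of that retry skeleton, assuming a caller-supplied condition (retryUntil and the interval bounds are mine, not minikube's):

// Sketch: re-check a condition with a short randomized delay between tries.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// Randomize the delay a little, as in the "will retry after 262.9ms" logs.
		d := 200*time.Millisecond + time.Duration(rand.Int63n(int64(300*time.Millisecond)))
		fmt.Printf("will retry after %v\n", d)
		time.Sleep(d)
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	tries := 0
	err := retryUntil(2*time.Second, func() (bool, error) {
		tries++
		return tries >= 3, nil // stand-in for "all required components running"
	})
	fmt.Println("done:", err)
}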
	I1119 01:57:49.895587   15977 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 01:57:49.895628   15977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 01:57:49.908081   15977 system_svc.go:56] duration metric: took 12.487096ms WaitForService to wait for kubelet
	I1119 01:57:49.908110   15977 kubeadm.go:587] duration metric: took 42.725642841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
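The WaitForService step shells into the node and asks systemd whether kubelet is active; an exit code of 0 from systemctl is-active means it is. A local sketch of the same check (minikube runs it over ssh_runner with sudo, as logged above):

// Sketch: check whether the kubelet systemd unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 iff the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}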
	I1119 01:57:49.908131   15977 node_conditions.go:102] verifying NodePressure condition ...
	I1119 01:57:49.910455   15977 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 01:57:49.910482   15977 node_conditions.go:123] node cpu capacity is 8
	I1119 01:57:49.910499   15977 node_conditions.go:105] duration metric: took 2.362295ms to run NodePressure ...
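The NodePressure step reads the node's reported capacity, which is where the 304681132Ki ephemeral-storage and 8-CPU figures above come from. A sketch that prints the same two values (the node name and default kubeconfig path are assumptions):

// Sketch: read node capacity as reported in the node_conditions logs.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-167289", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
}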
	I1119 01:57:49.910512   15977 start.go:242] waiting for startup goroutines ...
	I1119 01:57:49.963595   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:50.111951   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:50.166020   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:50.166088   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:50.463702   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:50.611593   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:50.712078   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:50.712734   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:50.964206   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:51.111981   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:51.165930   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:51.166097   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:51.464624   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:51.611756   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:51.665405   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:51.665464   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:51.964230   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:52.111212   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:52.165838   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:52.165956   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:52.463648   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:52.611908   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:52.665408   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:52.665489   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:52.964724   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:53.111562   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:53.164740   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:53.165691   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:53.464682   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:53.611777   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:53.665315   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:53.665448   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:53.964453   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:54.111655   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:54.165615   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:54.165708   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:54.463774   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:54.611312   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:54.666137   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:54.666138   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:54.964322   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:55.112766   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:55.170109   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:55.170230   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:55.467613   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:55.611729   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:55.665898   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:55.666365   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:55.964290   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:56.110977   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:56.165497   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:56.165548   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:56.464482   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:56.611594   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:56.664834   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:56.665873   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:56.963588   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:57.111084   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:57.165663   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:57.165678   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:57.463297   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:57.610592   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:57.664422   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:57.665501   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:57.964250   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:58.111019   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:58.165606   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:58.165645   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:58.464339   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:58.611288   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:58.665681   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:58.665728   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:58.964252   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:59.110663   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:59.165140   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:59.165264   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:59.464506   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:59.611494   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:59.665072   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:59.665760   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:59.964686   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:00.111387   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:00.165551   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:00.165593   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:00.464576   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:00.611598   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:00.665514   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:00.665848   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:00.963235   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:01.111011   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:01.165042   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:01.165087   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:01.463701   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:01.611591   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:01.665140   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:01.666907   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:01.963981   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:02.112028   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:02.165557   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:02.165625   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:02.464194   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:02.629052   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:02.743354   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:02.743695   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:02.963890   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:03.112163   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:03.165860   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:03.165877   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:03.463720   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:03.611768   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:03.665221   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:03.665595   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:03.963566   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:04.112087   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:04.165820   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:04.165927   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:04.463659   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:04.611603   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:04.711537   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:04.711738   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:04.965008   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:05.112580   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:05.165242   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:05.165331   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:05.474737   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:05.611526   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:05.664619   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:05.665450   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:05.964000   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:06.111050   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:06.165497   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:06.165567   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:06.464321   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:06.610972   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:06.710884   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:06.711074   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:06.964246   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:07.111460   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:07.166032   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:07.166101   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:07.463384   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:07.610983   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:07.664710   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:07.664833   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:07.963369   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:08.110942   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:08.165567   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:08.165608   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:08.463325   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:08.610856   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:08.664703   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:08.664791   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:08.963392   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:09.110909   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:09.164884   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:09.165073   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:09.463632   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:09.611746   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:09.665294   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:09.665356   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:09.965116   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:10.111324   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:10.165657   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:10.165697   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:10.463975   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:10.610983   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:10.665715   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:10.665712   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:10.964107   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:11.111976   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:11.165476   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:11.165498   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:11.463270   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:11.610992   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:11.665147   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:11.665219   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:11.963404   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:12.111184   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:12.165195   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:12.165238   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:12.463928   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:12.612111   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:12.665682   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:12.665682   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:12.964368   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:13.111027   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:13.165625   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:13.165774   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:13.464655   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:13.611923   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:13.665005   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:13.665122   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:13.963617   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:14.111406   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:14.166173   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:14.166233   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:14.463939   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:14.611841   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:14.664849   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:14.664967   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:14.965506   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:15.112555   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:15.166966   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:15.167019   15977 kapi.go:107] duration metric: took 1m6.504737546s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 01:58:15.463702   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:15.670477   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:15.670550   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:16.023501   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:16.111088   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:16.165712   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:16.463912   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:16.611944   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:16.665540   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:16.967112   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:17.111537   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:17.166223   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:17.487293   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:17.652009   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:17.665117   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:17.963679   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:18.111449   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:18.211922   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:18.463733   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:18.611145   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:18.665882   15977 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:18.963225   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:19.110947   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:19.165390   15977 kapi.go:107] duration metric: took 1m10.502663642s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 01:58:19.465330   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:19.611237   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:19.964698   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:20.111780   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:20.464075   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:20.663214   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:20.963974   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:21.111960   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:21.463665   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:21.612645   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:21.964016   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:22.110522   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:22.464327   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:22.611146   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:22.964400   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:23.111314   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:23.464333   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:23.611269   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:23.964354   15977 kapi.go:107] duration metric: took 1m8.503523861s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 01:58:23.966110   15977 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-167289 cluster.
	I1119 01:58:23.967865   15977 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 01:58:23.969091   15977 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 01:58:24.111774   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:24.610906   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:25.111503   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:25.610686   15977 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:26.111825   15977 kapi.go:107] duration metric: took 1m17.003833892s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 01:58:26.113638   15977 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, yakd, default-storageclass, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1119 01:58:26.114696   15977 addons.go:515] duration metric: took 1m18.932211525s for enable addons: enabled=[cloud-spanner ingress-dns inspektor-gadget storage-provisioner nvidia-device-plugin registry-creds amd-gpu-device-plugin yakd default-storageclass metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1119 01:58:26.114727   15977 start.go:247] waiting for cluster config update ...
	I1119 01:58:26.114744   15977 start.go:256] writing updated cluster config ...
	I1119 01:58:26.115011   15977 ssh_runner.go:195] Run: rm -f paused
	I1119 01:58:26.118723   15977 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 01:58:26.121356   15977 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xb5hd" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.124800   15977 pod_ready.go:94] pod "coredns-66bc5c9577-xb5hd" is "Ready"
	I1119 01:58:26.124821   15977 pod_ready.go:86] duration metric: took 3.443331ms for pod "coredns-66bc5c9577-xb5hd" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.126273   15977 pod_ready.go:83] waiting for pod "etcd-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.129356   15977 pod_ready.go:94] pod "etcd-addons-167289" is "Ready"
	I1119 01:58:26.129372   15977 pod_ready.go:86] duration metric: took 3.082682ms for pod "etcd-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.130847   15977 pod_ready.go:83] waiting for pod "kube-apiserver-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.133885   15977 pod_ready.go:94] pod "kube-apiserver-addons-167289" is "Ready"
	I1119 01:58:26.133903   15977 pod_ready.go:86] duration metric: took 3.040935ms for pod "kube-apiserver-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.135403   15977 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.521511   15977 pod_ready.go:94] pod "kube-controller-manager-addons-167289" is "Ready"
	I1119 01:58:26.521542   15977 pod_ready.go:86] duration metric: took 386.123241ms for pod "kube-controller-manager-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:26.721859   15977 pod_ready.go:83] waiting for pod "kube-proxy-lrvxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:27.121901   15977 pod_ready.go:94] pod "kube-proxy-lrvxh" is "Ready"
	I1119 01:58:27.121928   15977 pod_ready.go:86] duration metric: took 400.043351ms for pod "kube-proxy-lrvxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:27.322885   15977 pod_ready.go:83] waiting for pod "kube-scheduler-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:27.721338   15977 pod_ready.go:94] pod "kube-scheduler-addons-167289" is "Ready"
	I1119 01:58:27.721363   15977 pod_ready.go:86] duration metric: took 398.453182ms for pod "kube-scheduler-addons-167289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:27.721374   15977 pod_ready.go:40] duration metric: took 1.602627462s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 01:58:27.762809   15977 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 01:58:27.765472   15977 out.go:179] * Done! kubectl is now configured to use "addons-167289" cluster and "default" namespace by default
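	
	Annotation: the long runs of kapi.go:96 lines above are minikube's readiness poll. It lists pods matching a label selector about twice a second per selector and logs the phase until every match is Ready, at which point kapi.go:107 records the total wait. A rough command-line equivalent, using the selectors and namespaces visible in this log (the 120s timeout is an arbitrary illustration, not a minikube default):
	
		# Block until the registry addon pods report Ready (they run in kube-system)
		kubectl wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry -n kube-system --timeout=120s
		# Same pattern for the ingress controller, which runs in its own namespace
		kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx -n ingress-nginx --timeout=120s
	
	kubectl wait watches rather than polls, so it is not what minikube itself executes, but it answers the same question against the same selectors.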
	
	
	==> CRI-O <==
	Nov 19 01:58:25 addons-167289 crio[778]: time="2025-11-19T01:58:25.4767929Z" level=info msg="Starting container: 2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93" id=37aeb6fb-7185-4a77-9347-ea978d0be1e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 01:58:25 addons-167289 crio[778]: time="2025-11-19T01:58:25.479522784Z" level=info msg="Started container" PID=6140 containerID=2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93 description=kube-system/csi-hostpathplugin-m4svl/csi-snapshotter id=37aeb6fb-7185-4a77-9347-ea978d0be1e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17fd98ac962b98642071ee55b223ccfdb5c0badadf234a7d5df1d8487353fdf5
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.58057163Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ea41ba5c-9484-4d2b-8f34-953ff14b29fd name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.580656042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.587082077Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d47ff407d14aefc5d26c1691dd8519083a1e43533d047919d5a1ab62a2ca6707 UID:d4b4f227-0052-445f-a84d-de63013a9d7f NetNS:/var/run/netns/3bdb5207-0629-4c43-8bb1-5682f473e3fe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ac90}] Aliases:map[]}"
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.587123192Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.596633522Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d47ff407d14aefc5d26c1691dd8519083a1e43533d047919d5a1ab62a2ca6707 UID:d4b4f227-0052-445f-a84d-de63013a9d7f NetNS:/var/run/netns/3bdb5207-0629-4c43-8bb1-5682f473e3fe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ac90}] Aliases:map[]}"
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.596751497Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.597495545Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.598187504Z" level=info msg="Ran pod sandbox d47ff407d14aefc5d26c1691dd8519083a1e43533d047919d5a1ab62a2ca6707 with infra container: default/busybox/POD" id=ea41ba5c-9484-4d2b-8f34-953ff14b29fd name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.599260401Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=304008ca-f8d8-439a-bc0b-3eb97fc8504d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.59935855Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=304008ca-f8d8-439a-bc0b-3eb97fc8504d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.599389166Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=304008ca-f8d8-439a-bc0b-3eb97fc8504d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.599918291Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c3e23b25-4e53-4ea8-8654-2596a2b9f4e3 name=/runtime.v1.ImageService/PullImage
	Nov 19 01:58:28 addons-167289 crio[778]: time="2025-11-19T01:58:28.601191066Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.19187954Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c3e23b25-4e53-4ea8-8654-2596a2b9f4e3 name=/runtime.v1.ImageService/PullImage
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.192395159Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e2a6877-226c-4f34-beee-133c6c1731fe name=/runtime.v1.ImageService/ImageStatus
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.193612393Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=96b31a88-6f3e-41e1-8687-5f754870511a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.196572001Z" level=info msg="Creating container: default/busybox/busybox" id=e6eb10ae-ed0e-4a7d-b725-22be3ed5fd7c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.196682922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.202823952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.203283744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.232728697Z" level=info msg="Created container f400e2b2f34081e5eff07b080a41ad432c061312cab0b1a97d9b1a12dda254ab: default/busybox/busybox" id=e6eb10ae-ed0e-4a7d-b725-22be3ed5fd7c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.233221383Z" level=info msg="Starting container: f400e2b2f34081e5eff07b080a41ad432c061312cab0b1a97d9b1a12dda254ab" id=489770ab-c95e-4d75-96ca-cd85bc5d00f2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 01:58:29 addons-167289 crio[778]: time="2025-11-19T01:58:29.235260843Z" level=info msg="Started container" PID=6254 containerID=f400e2b2f34081e5eff07b080a41ad432c061312cab0b1a97d9b1a12dda254ab description=default/busybox/busybox id=489770ab-c95e-4d75-96ca-cd85bc5d00f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d47ff407d14aefc5d26c1691dd8519083a1e43533d047919d5a1ab62a2ca6707
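	
	Annotation: this CRI-O excerpt is the full pod-start flow for default/busybox: RunPodSandbox with a kindnet CNI attach, an ImageStatus miss for gcr.io/k8s-minikube/busybox:1.28.4-glibc, a PullImage that resolves the tag to a digest, then CreateContainer and StartContainer. The image half of that flow can be replayed by hand with crictl on the node (reachable via minikube ssh -p addons-167289; this assumes crictl's default endpoint is the CRI-O socket, as it is in this minikube image):
	
		# Mirrors the ImageStatus call: is the image already present?
		sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc
		# Mirrors PullImage: pulls by tag and reports the resolved digest
		sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc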
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	f400e2b2f3408       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   d47ff407d14ae       busybox                                    default
	2b3c875b37c34       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          12 seconds ago       Running             csi-snapshotter                          0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	0057cb6b6d59c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          13 seconds ago       Running             csi-provisioner                          0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	387526d34b521       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            14 seconds ago       Running             liveness-probe                           0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	559ecf78b7f29       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 14 seconds ago       Running             gcp-auth                                 0                   b97d86d0fe88e       gcp-auth-78565c9fb4-6lrls                  gcp-auth
	f44d066a2880c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	a1f91b84fd835       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            16 seconds ago       Running             gadget                                   0                   a94a5c3a36bac       gadget-qm258                               gadget
	d46baa577b02a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	1842b647980c9       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             19 seconds ago       Running             controller                               0                   9683973b7fa45       ingress-nginx-controller-6c8bf45fb-89mcj   ingress-nginx
	a9414b80efa2e       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             23 seconds ago       Exited              patch                                    2                   121072764b6b6       gcp-auth-certs-patch-hcqrv                 gcp-auth
	3e7307111a0a7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   3fd0bb1826b44       registry-proxy-7s98h                       kube-system
	e4525045db437       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     24 seconds ago       Running             amd-gpu-device-plugin                    0                   9486566ca191c       amd-gpu-device-plugin-cmmr7                kube-system
	5a9b09cdb5c24       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   25 seconds ago       Exited              create                                   0                   cebe50520f692       gcp-auth-certs-create-xzwbz                gcp-auth
	320316320c36a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      26 seconds ago       Running             volume-snapshot-controller               0                   9c7d9458f082c       snapshot-controller-7d9fbc56b8-qfskz       kube-system
	63f0ac7c6e1d6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   26 seconds ago       Exited              patch                                    0                   c6461a47f85ea       ingress-nginx-admission-patch-mq2v2        ingress-nginx
	77230f6072332       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     26 seconds ago       Running             nvidia-device-plugin-ctr                 0                   4fcb6eb4e7c9b       nvidia-device-plugin-daemonset-sb8hx       kube-system
	08a4a837401d8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             29 seconds ago       Running             local-path-provisioner                   0                   155a009cec772       local-path-provisioner-648f6765c9-sjqfv    local-path-storage
	4c4521da22d2e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      30 seconds ago       Running             volume-snapshot-controller               0                   d811783c895d2       snapshot-controller-7d9fbc56b8-q5bjz       kube-system
	6c5d7a569a83a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   31 seconds ago       Running             csi-external-health-monitor-controller   0                   17fd98ac962b9       csi-hostpathplugin-m4svl                   kube-system
	c45598982d3b3       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              32 seconds ago       Running             csi-resizer                              0                   3b1ba8a6f3db0       csi-hostpath-resizer-0                     kube-system
	fc07b5bfc1438       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             33 seconds ago       Running             csi-attacher                             0                   c9152a04f0679       csi-hostpath-attacher-0                    kube-system
	d1cb956fc89aa       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              34 seconds ago       Running             yakd                                     0                   c964bc05a1b7c       yakd-dashboard-5ff678cb9-lfwjh             yakd-dashboard
	f8b7afaf360ea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   37 seconds ago       Exited              create                                   0                   02971c980d802       ingress-nginx-admission-create-7868s       ingress-nginx
	ee1592f353982       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        38 seconds ago       Running             metrics-server                           0                   f3cbb42e42ced       metrics-server-85b7d694d7-j62rx            kube-system
	139e05f21703a       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           39 seconds ago       Running             registry                                 0                   17556ddd5ea3f       registry-6b586f9694-fvk8h                  kube-system
	ffeca55fd50eb       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               40 seconds ago       Running             cloud-spanner-emulator                   0                   d66fe0f14beb7       cloud-spanner-emulator-6f9fcf858b-2s48m    default
	28a0d1d0eb9de       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               43 seconds ago       Running             minikube-ingress-dns                     0                   fe5bbac44845e       kube-ingress-dns-minikube                  kube-system
	2d72765f224ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago       Running             storage-provisioner                      0                   e4aa1d9caef11       storage-provisioner                        kube-system
	4f2a8fdefa3a9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago       Running             coredns                                  0                   de48e2240e120       coredns-66bc5c9577-xb5hd                   kube-system
	fc26589821a5b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   d725d176769a0       kindnet-cf2ws                              kube-system
	76265018a97b0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   59ec2f47a03ed       kube-proxy-lrvxh                           kube-system
	2c19d6084be53       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   a8129147aa62b       kube-apiserver-addons-167289               kube-system
	caf07801af8b3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   833dcc9e71021       kube-controller-manager-addons-167289      kube-system
	32f9d499f63a2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   43bd61d555e0f       kube-scheduler-addons-167289               kube-system
	c9553756abec4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   1476ade767365       etcd-addons-167289                         kube-system
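	
	Annotation: the table above is a point-in-time CRI container listing; the Exited rows are completed certgen create/patch jobs, not failures. The same view comes straight from the runtime on the node, over the identical /runtime.v1.RuntimeService API seen in the CRI-O log:
	
		# All containers, including Exited ones (matches the table's STATE column)
		sudo crictl ps -a
		# The sandboxes that the POD ID column refers to
		sudo crictl pods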
	
	
	==> coredns [4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc] <==
	[INFO] 10.244.0.19:47448 - 14634 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003097853s
	[INFO] 10.244.0.19:57698 - 13432 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000066676s
	[INFO] 10.244.0.19:57698 - 13783 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000133757s
	[INFO] 10.244.0.19:54745 - 48043 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000080997s
	[INFO] 10.244.0.19:54745 - 47826 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000110404s
	[INFO] 10.244.0.19:56540 - 41744 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000055138s
	[INFO] 10.244.0.19:56540 - 41548 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000091285s
	[INFO] 10.244.0.19:48061 - 20653 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097963s
	[INFO] 10.244.0.19:48061 - 20250 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129224s
	[INFO] 10.244.0.22:56567 - 57906 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167784s
	[INFO] 10.244.0.22:38552 - 45955 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000200663s
	[INFO] 10.244.0.22:49311 - 15106 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101873s
	[INFO] 10.244.0.22:50831 - 32489 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000162313s
	[INFO] 10.244.0.22:38662 - 5847 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114795s
	[INFO] 10.244.0.22:34012 - 38999 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155233s
	[INFO] 10.244.0.22:42856 - 4201 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003841475s
	[INFO] 10.244.0.22:59887 - 36048 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005227468s
	[INFO] 10.244.0.22:38355 - 63225 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004972997s
	[INFO] 10.244.0.22:43527 - 49723 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00507724s
	[INFO] 10.244.0.22:44622 - 5488 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005057779s
	[INFO] 10.244.0.22:55446 - 28720 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005287485s
	[INFO] 10.244.0.22:38301 - 1892 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005165863s
	[INFO] 10.244.0.22:51183 - 5397 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007314785s
	[INFO] 10.244.0.22:58154 - 54417 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000755726s
	[INFO] 10.244.0.22:60816 - 43483 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00112143s
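	
	Annotation: the paired A/AAAA NXDOMAIN entries are single lookups for registry.kube-system.svc.cluster.local and storage.googleapis.com being expanded through the pod resolver's search path: the cluster suffixes first, then the host's own suffixes (.local and the GCE internal domains), until the final absolute query returns NOERROR. That is ordinary ndots-driven search-list behavior for cluster DNS, not an error. To reproduce it from inside the cluster, a throwaway pod works; busybox:1.28 is chosen here only because its nslookup is well behaved, and the pod name is arbitrary:
	
		# One-off DNS probe; the search-path expansion happens in the pod's resolver
		kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup registry.kube-system.svc.cluster.local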
	
	
	==> describe nodes <==
	Name:               addons-167289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-167289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=addons-167289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T01_57_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-167289
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-167289"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 01:56:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-167289
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 01:58:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 01:58:32 +0000   Wed, 19 Nov 2025 01:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 01:58:32 +0000   Wed, 19 Nov 2025 01:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 01:58:32 +0000   Wed, 19 Nov 2025 01:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 01:58:32 +0000   Wed, 19 Nov 2025 01:57:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-167289
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                78702d16-7ec5-4b22-9678-f0ef333e8730
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-6f9fcf858b-2s48m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gadget                      gadget-qm258                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  gcp-auth                    gcp-auth-78565c9fb4-6lrls                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-89mcj    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         90s
	  kube-system                 amd-gpu-device-plugin-cmmr7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-66bc5c9577-xb5hd                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpathplugin-m4svl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 etcd-addons-167289                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         97s
	  kube-system                 kindnet-cf2ws                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-addons-167289                250m (3%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-addons-167289       200m (2%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-lrvxh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-addons-167289                100m (1%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 metrics-server-85b7d694d7-j62rx             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         90s
	  kube-system                 nvidia-device-plugin-daemonset-sb8hx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 registry-6b586f9694-fvk8h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 registry-creds-764b6fb674-85l2k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-proxy-7s98h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 snapshot-controller-7d9fbc56b8-q5bjz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 snapshot-controller-7d9fbc56b8-qfskz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  local-path-storage          local-path-provisioner-648f6765c9-sjqfv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lfwjh              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node addons-167289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node addons-167289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x8 over 102s)  kubelet          Node addons-167289 status is now: NodeHasSufficientPID
	  Normal  Starting                 97s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node addons-167289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node addons-167289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node addons-167289 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s                  node-controller  Node addons-167289 event: Registered Node addons-167289 in Controller
	  Normal  NodeReady                50s                  kubelet          Node addons-167289 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 01:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000892] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.382912] i8042: Warning: Keylock active
	[  +0.007740] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.481610] block sda: the capability attribute has been deprecated.
	[  +0.087110] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.840612] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac] <==
	{"level":"warn","ts":"2025-11-19T01:56:58.597625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.603973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.610027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.616271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.623025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.629701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.636152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.641510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.657797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.663457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.669417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:56:58.715513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:09.473180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:09.479299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:36.112333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:36.119205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:57:36.141753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57256","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T01:58:02.626611Z","caller":"traceutil/trace.go:172","msg":"trace[1512337794] transaction","detail":"{read_only:false; response_revision:1039; number_of_response:1; }","duration":"123.6037ms","start":"2025-11-19T01:58:02.502989Z","end":"2025-11-19T01:58:02.626592Z","steps":["trace[1512337794] 'process raft request'  (duration: 123.417566ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T01:58:15.949802Z","caller":"traceutil/trace.go:172","msg":"trace[997775731] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"158.906166ms","start":"2025-11-19T01:58:15.790877Z","end":"2025-11-19T01:58:15.949783Z","steps":["trace[997775731] 'process raft request'  (duration: 101.777171ms)","trace[997775731] 'compare'  (duration: 57.038639ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T01:58:15.972701Z","caller":"traceutil/trace.go:172","msg":"trace[1121739427] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"150.229829ms","start":"2025-11-19T01:58:15.822455Z","end":"2025-11-19T01:58:15.972685Z","steps":["trace[1121739427] 'process raft request'  (duration: 150.069565ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T01:58:17.649810Z","caller":"traceutil/trace.go:172","msg":"trace[1376741424] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"134.665984ms","start":"2025-11-19T01:58:17.515123Z","end":"2025-11-19T01:58:17.649789Z","steps":["trace[1376741424] 'process raft request'  (duration: 71.983384ms)","trace[1376741424] 'compare'  (duration: 62.468388ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T01:58:17.936690Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.467719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaimtemplates\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T01:58:17.936771Z","caller":"traceutil/trace.go:172","msg":"trace[1102516106] range","detail":"{range_begin:/registry/resourceclaimtemplates; range_end:; response_count:0; response_revision:1167; }","duration":"126.563777ms","start":"2025-11-19T01:58:17.810190Z","end":"2025-11-19T01:58:17.936754Z","steps":["trace[1102516106] 'range keys from in-memory index tree'  (duration: 126.405737ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:58:20.825564Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.076484ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041412979033635 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:1092 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:65 lease:8128041412979033631 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T01:58:20.825661Z","caller":"traceutil/trace.go:172","msg":"trace[477823980] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"163.341724ms","start":"2025-11-19T01:58:20.662300Z","end":"2025-11-19T01:58:20.825642Z","steps":["trace[477823980] 'compare'  (duration: 157.986528ms)"],"step_count":1}
	
	
	==> gcp-auth [559ecf78b7f29981481dbedebcc741a683d8c35abcce59cef31007660aa35951] <==
	2025/11/19 01:58:23 GCP Auth Webhook started!
	2025/11/19 01:58:28 Ready to marshal response ...
	2025/11/19 01:58:28 Ready to write response ...
	2025/11/19 01:58:28 Ready to marshal response ...
	2025/11/19 01:58:28 Ready to write response ...
	2025/11/19 01:58:28 Ready to marshal response ...
	2025/11/19 01:58:28 Ready to write response ...
	
	
	==> kernel <==
	 01:58:38 up 41 min,  0 user,  load average: 2.37, 1.01, 0.39
	Linux addons-167289 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55] <==
	I1119 01:57:08.175602       1 main.go:148] setting mtu 1500 for CNI 
	I1119 01:57:08.175656       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 01:57:08.175699       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T01:57:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 01:57:08.472851       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 01:57:08.472880       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 01:57:08.472890       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 01:57:08.472990       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 01:57:38.473664       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 01:57:38.473672       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 01:57:38.473681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 01:57:38.473663       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1119 01:57:39.873810       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 01:57:39.873830       1 metrics.go:72] Registering metrics
	I1119 01:57:39.873871       1 controller.go:711] "Syncing nftables rules"
	I1119 01:57:48.480466       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:57:48.480516       1 main.go:301] handling current node
	I1119 01:57:58.472149       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:57:58.472181       1 main.go:301] handling current node
	I1119 01:58:08.472632       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:58:08.472666       1 main.go:301] handling current node
	I1119 01:58:18.472501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:58:18.472537       1 main.go:301] handling current node
	I1119 01:58:28.472621       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:58:28.472646       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef] <==
	I1119 01:57:15.408842       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.85.139"}
	W1119 01:57:36.112329       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:36.119201       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:36.135693       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:36.141729       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:48.899301       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.85.139:443: connect: connection refused
	E1119 01:57:48.899342       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.85.139:443: connect: connection refused" logger="UnhandledError"
	W1119 01:57:48.899525       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.85.139:443: connect: connection refused
	E1119 01:57:48.899560       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.85.139:443: connect: connection refused" logger="UnhandledError"
	W1119 01:57:48.915201       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.85.139:443: connect: connection refused
	E1119 01:57:48.915235       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.85.139:443: connect: connection refused" logger="UnhandledError"
	W1119 01:57:48.922256       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.85.139:443: connect: connection refused
	E1119 01:57:48.922375       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.85.139:443: connect: connection refused" logger="UnhandledError"
	W1119 01:58:01.718969       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 01:58:01.719049       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1119 01:58:01.719586       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.226.170:443: connect: connection refused" logger="UnhandledError"
	E1119 01:58:01.721225       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.226.170:443: connect: connection refused" logger="UnhandledError"
	E1119 01:58:01.726641       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.226.170:443: connect: connection refused" logger="UnhandledError"
	E1119 01:58:01.747807       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.226.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.226.170:443: connect: connection refused" logger="UnhandledError"
	I1119 01:58:01.822693       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 01:58:36.406722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41016: use of closed network connection
	E1119 01:58:36.547761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41042: use of closed network connection
	
	
	==> kube-controller-manager [caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597] <==
	I1119 01:57:06.090804       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 01:57:06.090834       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 01:57:06.090866       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 01:57:06.090944       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 01:57:06.090948       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 01:57:06.090953       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 01:57:06.091139       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-167289"
	I1119 01:57:06.091192       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 01:57:06.091232       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 01:57:06.093197       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 01:57:06.093278       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 01:57:06.095301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:57:06.098926       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 01:57:06.101158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:57:06.104392       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 01:57:06.108571       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 01:57:06.118211       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1119 01:57:36.105535       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 01:57:36.105689       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 01:57:36.105751       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 01:57:36.126807       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 01:57:36.130049       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 01:57:36.206016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:57:36.230463       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 01:57:51.096690       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651] <==
	I1119 01:57:08.098911       1 server_linux.go:53] "Using iptables proxy"
	I1119 01:57:08.315948       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 01:57:08.417144       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 01:57:08.417261       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 01:57:08.417370       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 01:57:08.461402       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 01:57:08.461484       1 server_linux.go:132] "Using iptables Proxier"
	I1119 01:57:08.468358       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 01:57:08.477256       1 server.go:527] "Version info" version="v1.34.1"
	I1119 01:57:08.477287       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 01:57:08.478658       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 01:57:08.480767       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 01:57:08.478873       1 config.go:200] "Starting service config controller"
	I1119 01:57:08.480885       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 01:57:08.478892       1 config.go:309] "Starting node config controller"
	I1119 01:57:08.481007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 01:57:08.481023       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 01:57:08.479448       1 config.go:106] "Starting endpoint slice config controller"
	I1119 01:57:08.481061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 01:57:08.581045       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 01:57:08.581178       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 01:57:08.582223       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245] <==
	E1119 01:56:59.106708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 01:56:59.106853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 01:56:59.106868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 01:56:59.106927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 01:56:59.107042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 01:56:59.107044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 01:56:59.107307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 01:56:59.107327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 01:56:59.107367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 01:56:59.107375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 01:56:59.107642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 01:56:59.107672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 01:56:59.107693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 01:56:59.107769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 01:56:59.107885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 01:56:59.107965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 01:56:59.920998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 01:56:59.961106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 01:56:59.970002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 01:57:00.146831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 01:57:00.258122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 01:57:00.287179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 01:57:00.287895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 01:57:00.375846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1119 01:57:03.505076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.046370    1293 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zs22h\" (UniqueName: \"kubernetes.io/projected/de1c222f-b966-4148-a483-3ffb7fdbce6a-kube-api-access-zs22h\") on node \"addons-167289\" DevicePath \"\""
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.046412    1293 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6kxfx\" (UniqueName: \"kubernetes.io/projected/f7e4382b-dfc5-4262-b741-88fad99326e4-kube-api-access-6kxfx\") on node \"addons-167289\" DevicePath \"\""
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.540477    1293 scope.go:117] "RemoveContainer" containerID="169f26f925499057d0e1437c130ecf8e342daeb901f68e9c97d79e9faed86a76"
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.753793    1293 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6461a47f85ea4596e0d7f4a13d81e4da038ed69054116988f4d7176c4746022"
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.756040    1293 scope.go:117] "RemoveContainer" containerID="169f26f925499057d0e1437c130ecf8e342daeb901f68e9c97d79e9faed86a76"
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.757952    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7s98h" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.760257    1293 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cebe50520f6924f70e7fd34714f1a20dd480aa5663404354efecc0152e38b865"
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.760561    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cmmr7" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 01:58:14 addons-167289 kubelet[1293]: I1119 01:58:14.788752    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-7s98h" podStartSLOduration=1.6905609780000002 podStartE2EDuration="26.788733996s" podCreationTimestamp="2025-11-19 01:57:48 +0000 UTC" firstStartedPulling="2025-11-19 01:57:49.406878928 +0000 UTC m=+47.943633329" lastFinishedPulling="2025-11-19 01:58:14.505051933 +0000 UTC m=+73.041806347" observedRunningTime="2025-11-19 01:58:14.788072029 +0000 UTC m=+73.324826451" watchObservedRunningTime="2025-11-19 01:58:14.788733996 +0000 UTC m=+73.325488418"
	Nov 19 01:58:15 addons-167289 kubelet[1293]: I1119 01:58:15.764478    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7s98h" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 01:58:16 addons-167289 kubelet[1293]: I1119 01:58:16.567422    1293 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrpk4\" (UniqueName: \"kubernetes.io/projected/56f9dd3d-503e-4e6d-af0a-ca2f3bba4b0e-kube-api-access-rrpk4\") pod \"56f9dd3d-503e-4e6d-af0a-ca2f3bba4b0e\" (UID: \"56f9dd3d-503e-4e6d-af0a-ca2f3bba4b0e\") "
	Nov 19 01:58:16 addons-167289 kubelet[1293]: I1119 01:58:16.570045    1293 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f9dd3d-503e-4e6d-af0a-ca2f3bba4b0e-kube-api-access-rrpk4" (OuterVolumeSpecName: "kube-api-access-rrpk4") pod "56f9dd3d-503e-4e6d-af0a-ca2f3bba4b0e" (UID: "56f9dd3d-503e-4e6d-af0a-ca2f3bba4b0e"). InnerVolumeSpecName "kube-api-access-rrpk4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 19 01:58:16 addons-167289 kubelet[1293]: I1119 01:58:16.668923    1293 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrpk4\" (UniqueName: \"kubernetes.io/projected/56f9dd3d-503e-4e6d-af0a-ca2f3bba4b0e-kube-api-access-rrpk4\") on node \"addons-167289\" DevicePath \"\""
	Nov 19 01:58:16 addons-167289 kubelet[1293]: I1119 01:58:16.769363    1293 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="121072764b6b6db6ebe6b3446f674807f70ec5947e6534342aa90c7d329b9a2f"
	Nov 19 01:58:18 addons-167289 kubelet[1293]: I1119 01:58:18.790173    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-89mcj" podStartSLOduration=57.654792929 podStartE2EDuration="1m10.790156903s" podCreationTimestamp="2025-11-19 01:57:08 +0000 UTC" firstStartedPulling="2025-11-19 01:58:04.876550346 +0000 UTC m=+63.413304768" lastFinishedPulling="2025-11-19 01:58:18.011914333 +0000 UTC m=+76.548668742" observedRunningTime="2025-11-19 01:58:18.789656106 +0000 UTC m=+77.326410528" watchObservedRunningTime="2025-11-19 01:58:18.790156903 +0000 UTC m=+77.326911324"
	Nov 19 01:58:20 addons-167289 kubelet[1293]: E1119 01:58:20.801303    1293 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 19 01:58:20 addons-167289 kubelet[1293]: E1119 01:58:20.801379    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dc568dd-9122-493f-a53a-1829913774ed-gcr-creds podName:4dc568dd-9122-493f-a53a-1829913774ed nodeName:}" failed. No retries permitted until 2025-11-19 01:58:52.801364416 +0000 UTC m=+111.338118817 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/4dc568dd-9122-493f-a53a-1829913774ed-gcr-creds") pod "registry-creds-764b6fb674-85l2k" (UID: "4dc568dd-9122-493f-a53a-1829913774ed") : secret "registry-creds-gcr" not found
	Nov 19 01:58:21 addons-167289 kubelet[1293]: I1119 01:58:21.803526    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-qm258" podStartSLOduration=64.99913835 podStartE2EDuration="1m13.803509957s" podCreationTimestamp="2025-11-19 01:57:08 +0000 UTC" firstStartedPulling="2025-11-19 01:58:12.459688166 +0000 UTC m=+70.996442566" lastFinishedPulling="2025-11-19 01:58:21.264059761 +0000 UTC m=+79.800814173" observedRunningTime="2025-11-19 01:58:21.803488022 +0000 UTC m=+80.340242444" watchObservedRunningTime="2025-11-19 01:58:21.803509957 +0000 UTC m=+80.340264377"
	Nov 19 01:58:22 addons-167289 kubelet[1293]: I1119 01:58:22.588743    1293 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 19 01:58:22 addons-167289 kubelet[1293]: I1119 01:58:22.588787    1293 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 19 01:58:23 addons-167289 kubelet[1293]: I1119 01:58:23.812668    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-6lrls" podStartSLOduration=66.966772666 podStartE2EDuration="1m8.812649891s" podCreationTimestamp="2025-11-19 01:57:15 +0000 UTC" firstStartedPulling="2025-11-19 01:58:21.251975553 +0000 UTC m=+79.788729972" lastFinishedPulling="2025-11-19 01:58:23.097852793 +0000 UTC m=+81.634607197" observedRunningTime="2025-11-19 01:58:23.811863167 +0000 UTC m=+82.348617589" watchObservedRunningTime="2025-11-19 01:58:23.812649891 +0000 UTC m=+82.349404320"
	Nov 19 01:58:25 addons-167289 kubelet[1293]: I1119 01:58:25.832320    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-m4svl" podStartSLOduration=1.717871389 podStartE2EDuration="37.832301927s" podCreationTimestamp="2025-11-19 01:57:48 +0000 UTC" firstStartedPulling="2025-11-19 01:57:49.319552264 +0000 UTC m=+47.856306665" lastFinishedPulling="2025-11-19 01:58:25.433982792 +0000 UTC m=+83.970737203" observedRunningTime="2025-11-19 01:58:25.831144859 +0000 UTC m=+84.367899281" watchObservedRunningTime="2025-11-19 01:58:25.832301927 +0000 UTC m=+84.369056348"
	Nov 19 01:58:28 addons-167289 kubelet[1293]: I1119 01:58:28.357532    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlgqh\" (UniqueName: \"kubernetes.io/projected/d4b4f227-0052-445f-a84d-de63013a9d7f-kube-api-access-vlgqh\") pod \"busybox\" (UID: \"d4b4f227-0052-445f-a84d-de63013a9d7f\") " pod="default/busybox"
	Nov 19 01:58:28 addons-167289 kubelet[1293]: I1119 01:58:28.357625    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d4b4f227-0052-445f-a84d-de63013a9d7f-gcp-creds\") pod \"busybox\" (UID: \"d4b4f227-0052-445f-a84d-de63013a9d7f\") " pod="default/busybox"
	Nov 19 01:58:29 addons-167289 kubelet[1293]: I1119 01:58:29.843111    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.249683533 podStartE2EDuration="1.843092748s" podCreationTimestamp="2025-11-19 01:58:28 +0000 UTC" firstStartedPulling="2025-11-19 01:58:28.599636516 +0000 UTC m=+87.136390930" lastFinishedPulling="2025-11-19 01:58:29.193045723 +0000 UTC m=+87.729800145" observedRunningTime="2025-11-19 01:58:29.842232355 +0000 UTC m=+88.378986778" watchObservedRunningTime="2025-11-19 01:58:29.843092748 +0000 UTC m=+88.379847171"
	
	
	==> storage-provisioner [2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6] <==
	W1119 01:58:13.501565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:15.505156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:15.509688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:17.512782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:17.651108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:19.654250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:19.658402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:21.661791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:21.665832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:23.668603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:23.672825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:25.675779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:25.679379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:27.681520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:27.684693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:29.687491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:29.690757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:31.693100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:31.698139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:33.700498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:33.704100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:35.706896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:35.711399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:37.713890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 01:58:37.717756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
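
Note: the paired storage-provisioner warnings at the end of the log dump above repeat on a roughly two-second cadence, which points at a leader-election lock still stored in a v1 Endpoints object being renewed on its retry period (an inference from the cadence; the log does not name the caller). Below is a minimal client-go sketch of the coordination.k8s.io/v1 Lease lock that avoids the deprecation warning; the lock name, namespace, identity, and timings are illustrative, not minikube's actual configuration:

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A LeaseLock stores the election record in a coordination.k8s.io/v1
		// Lease object instead of annotations on a deprecated v1 Endpoints
		// object, so renewals no longer trigger the warning above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "storage-provisioner", // illustrative lock name
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "addons-167289"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second, // matches the ~2s cadence of the warnings above
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}

Switching the lock type leaves the election semantics unchanged; only the object the election record is written to differs.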
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-167289 -n addons-167289
helpers_test.go:269: (dbg) Run:  kubectl --context addons-167289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-create-xzwbz gcp-auth-certs-patch-hcqrv ingress-nginx-admission-create-7868s ingress-nginx-admission-patch-mq2v2 registry-creds-764b6fb674-85l2k
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-167289 describe pod gcp-auth-certs-create-xzwbz gcp-auth-certs-patch-hcqrv ingress-nginx-admission-create-7868s ingress-nginx-admission-patch-mq2v2 registry-creds-764b6fb674-85l2k
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-167289 describe pod gcp-auth-certs-create-xzwbz gcp-auth-certs-patch-hcqrv ingress-nginx-admission-create-7868s ingress-nginx-admission-patch-mq2v2 registry-creds-764b6fb674-85l2k: exit status 1 (57.095182ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-xzwbz" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-hcqrv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-7868s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mq2v2" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-85l2k" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-167289 describe pod gcp-auth-certs-create-xzwbz gcp-auth-certs-patch-hcqrv ingress-nginx-admission-create-7868s ingress-nginx-admission-patch-mq2v2 registry-creds-764b6fb674-85l2k: exit status 1
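
Note: the NotFound errors above are a benign race rather than a second failure: the field-selector list at helpers_test.go:269 and the describe at helpers_test.go:285 are separate API round-trips, and the completed job pods (gcp-auth-certs-*, ingress-nginx-admission-*) were deleted in between. A sketch of the same list-then-get pattern in client-go, tolerating the race; the kubeconfig path and program structure are illustrative, not the test helper's source:

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		// Step 1: the post-mortem's list of non-running pods across all namespaces.
		pods, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			log.Fatal(err)
		}

		// Step 2: describing each pod is a second round of API calls; any pod
		// garbage-collected in between (e.g. a completed Job pod) comes back 404.
		for _, p := range pods.Items {
			_, err := client.CoreV1().Pods(p.Namespace).Get(ctx, p.Name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				fmt.Printf("pod %s/%s vanished between List and Get\n", p.Namespace, p.Name)
			}
		}
	}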
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable headlamp --alsologtostderr -v=1: exit status 11 (228.322711ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:58:38.967557   24990 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:38.967732   24990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:38.967743   24990 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:38.967750   24990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:38.967936   24990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:38.968208   24990 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:38.968552   24990 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:38.968570   24990 addons.go:607] checking whether the cluster is paused
	I1119 01:58:38.968674   24990 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:38.968689   24990 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:38.969049   24990 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:38.986136   24990 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:38.986181   24990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:39.001799   24990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:39.093340   24990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:39.093417   24990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:39.120764   24990 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:39.120787   24990 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:39.120792   24990 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:39.120796   24990 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:39.120799   24990 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:39.120802   24990 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:39.120805   24990 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:39.120808   24990 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:39.120812   24990 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:39.120827   24990 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:39.120831   24990 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:39.120835   24990 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:39.120844   24990 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:39.120849   24990 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:39.120863   24990 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:39.120874   24990 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:39.120881   24990 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:39.120887   24990 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:39.120891   24990 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:39.120895   24990 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:39.120900   24990 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:39.120903   24990 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:39.120905   24990 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:39.120913   24990 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:39.120917   24990 cri.go:89] found id: ""
	I1119 01:58:39.120971   24990 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:39.134192   24990 out.go:203] 
	W1119 01:58:39.135599   24990 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:39.135623   24990 out.go:285] * 
	* 
	W1119 01:58:39.138566   24990 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:39.139785   24990 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.36s)
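
All of the MK_ADDON_DISABLE_PAUSED failures in this run (Headlamp above; CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd and AmdGpuDevicePlugin below) share the same signature: the crictl listing of kube-system containers succeeds, but the follow-up paused check shells out to "sudo runc list -f json", which exits 1 because the runc state directory /run/runc does not exist on these crio nodes. A minimal Go sketch of that failing step, standard library only (this mirrors the shape of the call that produces the stderr above; it is not minikube's actual code):

    // runc_check_sketch.go - reproduce the failing "check paused" step by hand.
    // Assumption: run on the minikube node (e.g. via "minikube ssh"); on a crio
    // node without /run/runc, runc exits 1 and prints
    // "open /run/runc: no such file or directory", matching the logs above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
        if err != nil {
            fmt.Printf("list paused failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("runc containers: %s\n", out)
    }

The crictl call above succeeds, so the container runtime itself is healthy; only the runc-specific state lookup fails.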

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-2s48m" [7a102488-ad10-44e2-8a40-6d15026cf912] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002298832s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (249.768627ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:58:54.396248   26923 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:54.396501   26923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:54.396511   26923 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:54.396515   26923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:54.396704   26923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:54.396947   26923 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:54.397277   26923 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:54.397294   26923 addons.go:607] checking whether the cluster is paused
	I1119 01:58:54.397379   26923 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:54.397391   26923 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:54.397819   26923 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:54.416403   26923 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:54.416468   26923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:54.434923   26923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:54.527193   26923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:54.527271   26923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:54.559319   26923 cri.go:89] found id: "6bb674f78899cf132b694502d156929f80addc5f2e093e36d38f505f43b4e6ed"
	I1119 01:58:54.559342   26923 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:54.559348   26923 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:54.559353   26923 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:54.559356   26923 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:54.559361   26923 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:54.559366   26923 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:54.559370   26923 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:54.559375   26923 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:54.559382   26923 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:54.559387   26923 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:54.559391   26923 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:54.559396   26923 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:54.559406   26923 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:54.559414   26923 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:54.559452   26923 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:54.559461   26923 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:54.559466   26923 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:54.559470   26923 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:54.559474   26923 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:54.559478   26923 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:54.559482   26923 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:54.559485   26923 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:54.559490   26923 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:54.559494   26923 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:54.559498   26923 cri.go:89] found id: ""
	I1119 01:58:54.559540   26923 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:54.574347   26923 out.go:203] 
	W1119 01:58:54.575504   26923 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:54.575526   26923 out.go:285] * 
	W1119 01:58:54.579348   26923 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:54.580472   26923 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

TestAddons/parallel/LocalPath (8.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-167289 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-167289 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-167289 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [135a4a73-4dbf-4fd7-8a7e-da3461f7d6c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [135a4a73-4dbf-4fd7-8a7e-da3461f7d6c7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [135a4a73-4dbf-4fd7-8a7e-da3461f7d6c7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002937951s
addons_test.go:967: (dbg) Run:  kubectl --context addons-167289 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 ssh "cat /opt/local-path-provisioner/pvc-18f6647e-e829-4ddb-8ec4-c1f78ee38e49_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-167289 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-167289 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (232.348005ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:59:01.953763   27426 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:59:01.953898   27426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:59:01.953907   27426 out.go:374] Setting ErrFile to fd 2...
	I1119 01:59:01.953911   27426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:59:01.954101   27426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:59:01.954340   27426 mustload.go:66] Loading cluster: addons-167289
	I1119 01:59:01.954701   27426 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:59:01.954715   27426 addons.go:607] checking whether the cluster is paused
	I1119 01:59:01.954802   27426 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:59:01.954813   27426 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:59:01.955131   27426 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:59:01.973060   27426 ssh_runner.go:195] Run: systemctl --version
	I1119 01:59:01.973117   27426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:59:01.990899   27426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:59:02.083395   27426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:59:02.083485   27426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:59:02.111672   27426 cri.go:89] found id: "6bb674f78899cf132b694502d156929f80addc5f2e093e36d38f505f43b4e6ed"
	I1119 01:59:02.111690   27426 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:59:02.111694   27426 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:59:02.111697   27426 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:59:02.111700   27426 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:59:02.111703   27426 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:59:02.111705   27426 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:59:02.111708   27426 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:59:02.111711   27426 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:59:02.111716   27426 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:59:02.111720   27426 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:59:02.111724   27426 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:59:02.111728   27426 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:59:02.111736   27426 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:59:02.111741   27426 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:59:02.111756   27426 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:59:02.111763   27426 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:59:02.111767   27426 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:59:02.111770   27426 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:59:02.111772   27426 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:59:02.111777   27426 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:59:02.111779   27426 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:59:02.111782   27426 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:59:02.111784   27426 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:59:02.111786   27426 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:59:02.111788   27426 cri.go:89] found id: ""
	I1119 01:59:02.111826   27426 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:59:02.124658   27426 out.go:203] 
	W1119 01:59:02.125636   27426 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:59:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:59:02.125654   27426 out.go:285] * 
	W1119 01:59:02.128703   27426 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:59:02.130139   27426 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.10s)

TestAddons/parallel/NvidiaDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-sb8hx" [d9fdbfe8-df6b-4329-ba9d-8ce33b033a74] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002726602s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (241.030115ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:58:41.846255   25053 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:41.846408   25053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:41.846419   25053 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:41.846423   25053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:41.846702   25053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:41.846980   25053 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:41.847299   25053 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:41.847314   25053 addons.go:607] checking whether the cluster is paused
	I1119 01:58:41.847411   25053 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:41.847426   25053 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:41.847815   25053 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:41.865699   25053 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:41.865762   25053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:41.883779   25053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:41.978577   25053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:41.978669   25053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:42.006090   25053 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:42.006109   25053 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:42.006113   25053 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:42.006118   25053 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:42.006121   25053 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:42.006124   25053 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:42.006127   25053 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:42.006129   25053 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:42.006132   25053 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:42.006141   25053 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:42.006146   25053 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:42.006149   25053 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:42.006151   25053 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:42.006154   25053 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:42.006157   25053 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:42.006174   25053 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:42.006181   25053 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:42.006185   25053 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:42.006188   25053 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:42.006190   25053 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:42.006195   25053 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:42.006197   25053 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:42.006199   25053 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:42.006202   25053 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:42.006204   25053 cri.go:89] found id: ""
	I1119 01:58:42.006257   25053 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:42.019300   25053 out.go:203] 
	W1119 01:58:42.020556   25053 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:42.020574   25053 out.go:285] * 
	W1119 01:58:42.023704   25053 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:42.024895   25053 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)

TestAddons/parallel/Yakd (5.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lfwjh" [b8c769b7-fd7c-4fff-a7fd-49ce32452ea2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003544397s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable yakd --alsologtostderr -v=1: exit status 11 (269.670422ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:58:53.830520   26771 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:53.830812   26771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:53.830825   26771 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:53.830831   26771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:53.831099   26771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:53.831451   26771 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:53.831945   26771 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:53.831967   26771 addons.go:607] checking whether the cluster is paused
	I1119 01:58:53.832104   26771 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:53.832123   26771 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:53.832634   26771 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:53.854903   26771 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:53.854965   26771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:53.877067   26771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:53.974027   26771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:53.974111   26771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:54.002540   26771 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:54.002566   26771 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:54.002572   26771 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:54.002578   26771 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:54.002582   26771 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:54.002586   26771 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:54.002597   26771 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:54.002602   26771 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:54.002606   26771 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:54.002616   26771 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:54.002621   26771 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:54.002625   26771 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:54.002628   26771 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:54.002633   26771 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:54.002637   26771 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:54.002652   26771 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:54.002663   26771 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:54.002669   26771 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:54.002673   26771 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:54.002677   26771 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:54.002681   26771 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:54.002685   26771 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:54.002689   26771 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:54.002693   26771 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:54.002698   26771 cri.go:89] found id: ""
	I1119 01:58:54.002746   26771 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:54.019546   26771 out.go:203] 
	W1119 01:58:54.020732   26771 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:54.020751   26771 out.go:285] * 
	W1119 01:58:54.024733   26771 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:54.026000   26771 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)

TestAddons/parallel/AmdGpuDevicePlugin (6.24s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-cmmr7" [a0938c28-80c2-4166-8f36-4747dc5172b0] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003033552s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-167289 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-167289 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (232.015424ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 01:58:48.579607   26309 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:58:48.579746   26309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:48.579755   26309 out.go:374] Setting ErrFile to fd 2...
	I1119 01:58:48.579759   26309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:58:48.579945   26309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:58:48.580181   26309 mustload.go:66] Loading cluster: addons-167289
	I1119 01:58:48.580527   26309 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:48.580543   26309 addons.go:607] checking whether the cluster is paused
	I1119 01:58:48.580633   26309 config.go:182] Loaded profile config "addons-167289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:48.580645   26309 host.go:66] Checking if "addons-167289" exists ...
	I1119 01:58:48.580995   26309 cli_runner.go:164] Run: docker container inspect addons-167289 --format={{.State.Status}}
	I1119 01:58:48.597863   26309 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:48.597921   26309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-167289
	I1119 01:58:48.613733   26309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/addons-167289/id_rsa Username:docker}
	I1119 01:58:48.706508   26309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:48.706588   26309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:48.733248   26309 cri.go:89] found id: "2b3c875b37c34a1aaf8e79a105e8e53fae42de67a4a4a839e6099e2e76e3ee93"
	I1119 01:58:48.733266   26309 cri.go:89] found id: "0057cb6b6d59c2d741aceb29df5771b62c0e82207a84f69ac5154387cbd84153"
	I1119 01:58:48.733270   26309 cri.go:89] found id: "387526d34b521aa97915fbe3e7854312807b05167ee255ed3d4dfbf358eb18ab"
	I1119 01:58:48.733273   26309 cri.go:89] found id: "f44d066a2880c3b89fb901e65c68edf6462e6f5ee4704d445d70bab540e140db"
	I1119 01:58:48.733275   26309 cri.go:89] found id: "d46baa577b02a9113e070c4a0480941b3b25bbbcce455137088c83a4b640d69f"
	I1119 01:58:48.733279   26309 cri.go:89] found id: "3e7307111a0a7ff2319df0e4a44e2dfdd6899963934cd8f81e97fe79104558fe"
	I1119 01:58:48.733281   26309 cri.go:89] found id: "e4525045db437311150f979f145e5df2b15dba4a85832f3b40b56d9e95456c85"
	I1119 01:58:48.733284   26309 cri.go:89] found id: "320316320c36a31575ed518280c787f454599b6f6db11a50abd8a2b071eab8ce"
	I1119 01:58:48.733288   26309 cri.go:89] found id: "77230f6072332b89f67e0a13fc3e2f90a73b685df581bca576a4aa98a0393837"
	I1119 01:58:48.733301   26309 cri.go:89] found id: "4c4521da22d2eb06ed45356e3e80a96ea0146646cd996eb249b4381da1a14456"
	I1119 01:58:48.733306   26309 cri.go:89] found id: "6c5d7a569a83aee258230f3e4101efcec68212fb81bd79541a6db05f42d1a635"
	I1119 01:58:48.733310   26309 cri.go:89] found id: "c45598982d3b30077574919aa2f884686b6cc7cef2866a9077b7aaa5b63ec66f"
	I1119 01:58:48.733317   26309 cri.go:89] found id: "fc07b5bfc14386b4ffa6dbdfb46e833fb2891243713de31478929edea09648dc"
	I1119 01:58:48.733322   26309 cri.go:89] found id: "ee1592f353982b5c192b5c5fede23bebda0067235ac78605adf1748bd5b7a544"
	I1119 01:58:48.733329   26309 cri.go:89] found id: "139e05f21703a92685b5f507816a8e38f914726f6ef0aa1b6cace7a7821c19fa"
	I1119 01:58:48.733337   26309 cri.go:89] found id: "28a0d1d0eb9de3e99faff9a60e034a22f2550e1d457b6e8c119f0069bb8c2dfb"
	I1119 01:58:48.733341   26309 cri.go:89] found id: "2d72765f224ed9ae16aaaf83613ea7845ee7fcd72d8bd4046856b1ae4dcbe2f6"
	I1119 01:58:48.733346   26309 cri.go:89] found id: "4f2a8fdefa3a921e57d39ce31bcba0663aef48e6cc812fddf8202cddb72408bc"
	I1119 01:58:48.733348   26309 cri.go:89] found id: "fc26589821a5be523a8e87f633779e2f88a1591ee75b009035709790a1af3b55"
	I1119 01:58:48.733350   26309 cri.go:89] found id: "76265018a97b054599e84272f03b9d9c1514d776b5f51869b14493d4baae8651"
	I1119 01:58:48.733356   26309 cri.go:89] found id: "2c19d6084be53e03b3e3ac1e879db7b855bd9d15968238b086b01debd7d271ef"
	I1119 01:58:48.733358   26309 cri.go:89] found id: "caf07801af8b3aabb5593451ab4fa658a97ce71604a86b5c1c38b3f9bde7e597"
	I1119 01:58:48.733360   26309 cri.go:89] found id: "32f9d499f63a2a4e0bec492e9ab5fd077b296adbf502d37c0814b310de798245"
	I1119 01:58:48.733363   26309 cri.go:89] found id: "c9553756abec4455cfb96c8ccb4ac24ebed143c24ffb9c08df7706240e482bac"
	I1119 01:58:48.733365   26309 cri.go:89] found id: ""
	I1119 01:58:48.733402   26309 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 01:58:48.746389   26309 out.go:203] 
	W1119 01:58:48.747613   26309 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T01:58:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 01:58:48.747629   26309 out.go:285] * 
	W1119 01:58:48.750542   26309 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 01:58:48.751701   26309 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-167289 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.24s)

TestFunctional/parallel/ServiceCmdConnect (602.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-345998 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-345998 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7vs5f" [d28c14e3-04b9-495c-8b69-79f1306127b5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-345998 -n functional-345998
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-19 02:14:02.855774669 +0000 UTC m=+1069.806954278
functional_test.go:1645: (dbg) Run:  kubectl --context functional-345998 describe po hello-node-connect-7d85dfc575-7vs5f -n default
functional_test.go:1645: (dbg) kubectl --context functional-345998 describe po hello-node-connect-7d85dfc575-7vs5f -n default:
Name:             hello-node-connect-7d85dfc575-7vs5f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-345998/192.168.49.2
Start Time:       Wed, 19 Nov 2025 02:04:02 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r89zx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-r89zx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7vs5f to functional-345998
  Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m10s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m10s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m46s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m46s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
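
The repeated pull failure above is crio's short-name policy at work: with short-name-mode = "enforcing" in containers-registries.conf(5), an unqualified image name like kicbase/echo-server that matches more than one unqualified-search registry is rejected as ambiguous instead of being pulled. Two plausible remedies, stated as assumptions rather than anything this run verified: reference the image fully qualified in the deployment (e.g. docker.io/kicbase/echo-server), or give the node a short-name alias, for example:

    # Hypothetical drop-in, e.g. /etc/containers/registries.conf.d/99-echo-server.conf
    # (assumes the intended image is Docker Hub's kicbase/echo-server)
    [aliases]
      "kicbase/echo-server" = "docker.io/kicbase/echo-server"
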
functional_test.go:1645: (dbg) Run:  kubectl --context functional-345998 logs hello-node-connect-7d85dfc575-7vs5f -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-345998 logs hello-node-connect-7d85dfc575-7vs5f -n default: exit status 1 (66.381146ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7vs5f" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-345998 logs hello-node-connect-7d85dfc575-7vs5f -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-345998 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-7vs5f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-345998/192.168.49.2
Start Time:       Wed, 19 Nov 2025 02:04:02 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r89zx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-r89zx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7vs5f to functional-345998
  Normal   Pulling    7m11s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m11s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m11s (x5 over 9m59s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m47s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m47s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-345998 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-345998 logs -l app=hello-node-connect: exit status 1 (57.967661ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7vs5f" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-345998 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-345998 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.197.37
IPs:                      10.96.197.37
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32614/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
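Note the empty Endpoints: line above; it is the service-level echo of the pod failure. Because the only backing pod never reached Ready, the endpoints controller publishes no addresses, so NodePort 32614 accepts connections but has nowhere to forward them. A quick confirmation with the same context would be:

	kubectl --context functional-345998 get endpoints hello-node-connect

An empty ENDPOINTS column there rules out the connectivity test succeeding no matter what the network path looks like.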
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-345998
helpers_test.go:243: (dbg) docker inspect functional-345998:

-- stdout --
	[
	    {
	        "Id": "ea46591ee166a5c50d3b350db7a4bc6ad96c83e4b22295e30693f985c805286f",
	        "Created": "2025-11-19T02:02:18.085811978Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 38152,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:02:18.11452071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/ea46591ee166a5c50d3b350db7a4bc6ad96c83e4b22295e30693f985c805286f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea46591ee166a5c50d3b350db7a4bc6ad96c83e4b22295e30693f985c805286f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea46591ee166a5c50d3b350db7a4bc6ad96c83e4b22295e30693f985c805286f/hosts",
	        "LogPath": "/var/lib/docker/containers/ea46591ee166a5c50d3b350db7a4bc6ad96c83e4b22295e30693f985c805286f/ea46591ee166a5c50d3b350db7a4bc6ad96c83e4b22295e30693f985c805286f-json.log",
	        "Name": "/functional-345998",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-345998:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-345998",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ea46591ee166a5c50d3b350db7a4bc6ad96c83e4b22295e30693f985c805286f",
	                "LowerDir": "/var/lib/docker/overlay2/81df8c9eef84b9b718e52938efca38b66bd1f611baf9babab9e19b639bc0979b-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81df8c9eef84b9b718e52938efca38b66bd1f611baf9babab9e19b639bc0979b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81df8c9eef84b9b718e52938efca38b66bd1f611baf9babab9e19b639bc0979b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81df8c9eef84b9b718e52938efca38b66bd1f611baf9babab9e19b639bc0979b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-345998",
	                "Source": "/var/lib/docker/volumes/functional-345998/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-345998",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-345998",
	                "name.minikube.sigs.k8s.io": "functional-345998",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "320c1898312cb2d88bb259182594ad7b23b5ff3e09392b913b008c4d41f5ddb0",
	            "SandboxKey": "/var/run/docker/netns/320c1898312c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-345998": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91999f5a70e2799d2491982cd489fe90b5b61659d7c8ef7c5c9205a69a00efd7",
	                    "EndpointID": "09523981e1e8cda9ffaeb15b99b3cd922f2e87732d6b9870e1778887376adb0c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "7a:f1:df:5a:f0:c0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-345998",
	                        "ea46591ee166"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
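Most of the inspect dump above is boilerplate; the piece this post-mortem actually needs is the published port map under NetworkSettings.Ports. When reading these reports by hand, a Go-template query (standard docker CLI syntax, container name taken from the dump) narrows it down:

	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-345998

Here it would return 22, 2376, 5000, 8441 and 32443/tcp each bound to 127.0.0.1 on an ephemeral host port, confirming the API server port (8441) was reachable from the host and the failure sits inside the cluster.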
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-345998 -n functional-345998
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-345998 logs -n 25: (1.2047904s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-345998 ssh sudo cat /etc/test/nested/copy/14634/hosts                                                           │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ start          │ -p functional-345998 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │                     │
	│ ssh            │ functional-345998 ssh sudo cat /etc/ssl/certs/14634.pem                                                                    │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh sudo cat /usr/share/ca-certificates/14634.pem                                                        │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh sudo cat /etc/ssl/certs/146342.pem                                                                   │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh sudo cat /usr/share/ca-certificates/146342.pem                                                       │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ cp             │ functional-345998 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh -n functional-345998 sudo cat /home/docker/cp-test.txt                                               │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ cp             │ functional-345998 cp functional-345998:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2373048782/001/cp-test.txt │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh -n functional-345998 sudo cat /home/docker/cp-test.txt                                               │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ cp             │ functional-345998 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh -n functional-345998 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ license        │                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ image          │ functional-345998 image ls --format short --alsologtostderr                                                                │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ image          │ functional-345998 image ls --format yaml --alsologtostderr                                                                 │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ ssh            │ functional-345998 ssh pgrep buildkitd                                                                                      │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │                     │
	│ image          │ functional-345998 image build -t localhost/my-image:functional-345998 testdata/build --alsologtostderr                     │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ image          │ functional-345998 image ls                                                                                                 │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ image          │ functional-345998 image ls --format json --alsologtostderr                                                                 │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ image          │ functional-345998 image ls --format table --alsologtostderr                                                                │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ update-context │ functional-345998 update-context --alsologtostderr -v=2                                                                    │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ update-context │ functional-345998 update-context --alsologtostderr -v=2                                                                    │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	│ update-context │ functional-345998 update-context --alsologtostderr -v=2                                                                    │ functional-345998 │ jenkins │ v1.37.0 │ 19 Nov 25 02:04 UTC │ 19 Nov 25 02:04 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:04:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:04:27.937918   52074 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:04:27.938015   52074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:27.938024   52074 out.go:374] Setting ErrFile to fd 2...
	I1119 02:04:27.938028   52074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:27.938289   52074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:04:27.938694   52074 out.go:368] Setting JSON to false
	I1119 02:04:27.939581   52074 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2815,"bootTime":1763515053,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:04:27.939666   52074 start.go:143] virtualization: kvm guest
	I1119 02:04:27.941159   52074 out.go:179] * [functional-345998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:04:27.942531   52074 notify.go:221] Checking for updates...
	I1119 02:04:27.942563   52074 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:04:27.943677   52074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:04:27.944793   52074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:04:27.945883   52074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:04:27.946901   52074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:04:27.947935   52074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:04:27.949241   52074 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:04:27.949674   52074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:04:27.971517   52074 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:04:27.971624   52074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:04:28.027963   52074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 02:04:28.01755285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:04:28.028099   52074 docker.go:319] overlay module found
	I1119 02:04:28.030501   52074 out.go:179] * Using the docker driver based on the existing profile
	I1119 02:04:28.031577   52074 start.go:309] selected driver: docker
	I1119 02:04:28.031598   52074 start.go:930] validating driver "docker" against &{Name:functional-345998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-345998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:04:28.031681   52074 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:04:28.033341   52074 out.go:203] 
	W1119 02:04:28.034409   52074 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1119 02:04:28.035651   52074 out.go:203] 
	
	
	==> CRI-O <==
	Nov 19 02:04:31 functional-345998 crio[3576]: time="2025-11-19T02:04:31.704681237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:04:31 functional-345998 crio[3576]: time="2025-11-19T02:04:31.708471258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:04:31 functional-345998 crio[3576]: time="2025-11-19T02:04:31.708632195Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/05e130af72660743a09f4eb04a881ba986c8639f33d0cc9d52deb7c9ca311202/merged/etc/group: no such file or directory"
	Nov 19 02:04:31 functional-345998 crio[3576]: time="2025-11-19T02:04:31.708921945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:04:31 functional-345998 crio[3576]: time="2025-11-19T02:04:31.737390856Z" level=info msg="Created container 217489873e8d13bee1b4380a4f69aeb8b86762e3132ba0d5055a8079ef995823: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pwdz7/kubernetes-dashboard" id=ebfb998e-f668-43a1-9726-461f3a2ba282 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:04:31 functional-345998 crio[3576]: time="2025-11-19T02:04:31.737901486Z" level=info msg="Starting container: 217489873e8d13bee1b4380a4f69aeb8b86762e3132ba0d5055a8079ef995823" id=7e8d770b-193e-4b59-af80-f9932a44f9b4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:04:31 functional-345998 crio[3576]: time="2025-11-19T02:04:31.739515599Z" level=info msg="Started container" PID=7006 containerID=217489873e8d13bee1b4380a4f69aeb8b86762e3132ba0d5055a8079ef995823 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pwdz7/kubernetes-dashboard id=7e8d770b-193e-4b59-af80-f9932a44f9b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02ac77a626cd159190b70fe69cb70ddd9985b524f82a26506949fbdb21c3a23a
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.041288593Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=4315c781-0cf7-4c44-b0df-624f61953fc4 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.041958407Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=a77c2f01-4635-4b56-b601-70b9f3fd59f0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.044517651Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=891d1bfc-f213-44c6-b560-ed55a7f5bf60 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.048721019Z" level=info msg="Creating container: default/mysql-5bb876957f-pktdw/mysql" id=52b49389-3de9-4f39-b736-e47598898035 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.048849541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.054833658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.055351965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.086187172Z" level=info msg="Created container e63fe675ea27c834e17037008cdca758c9b29176da6d7e4908bce89855e51493: default/mysql-5bb876957f-pktdw/mysql" id=52b49389-3de9-4f39-b736-e47598898035 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.086733107Z" level=info msg="Starting container: e63fe675ea27c834e17037008cdca758c9b29176da6d7e4908bce89855e51493" id=2104141c-ad7c-4649-a3f2-caa4dfa09ee2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:04:40 functional-345998 crio[3576]: time="2025-11-19T02:04:40.088581805Z" level=info msg="Started container" PID=7368 containerID=e63fe675ea27c834e17037008cdca758c9b29176da6d7e4908bce89855e51493 description=default/mysql-5bb876957f-pktdw/mysql id=2104141c-ad7c-4649-a3f2-caa4dfa09ee2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebf1de5bc3758c92fe19e6d665773bfbe783488f9028098cc9ec8c216d71a8b2
	Nov 19 02:04:41 functional-345998 crio[3576]: time="2025-11-19T02:04:41.642921236Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4fb795ff-e5ac-4cd2-bacb-8426eaa77fb5 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:04:43 functional-345998 crio[3576]: time="2025-11-19T02:04:43.643057355Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f686126f-984a-4cd4-a7c4-482f72234803 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:05:23 functional-345998 crio[3576]: time="2025-11-19T02:05:23.64265229Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3e501cfb-7e27-42c6-a6ee-a3bd128d552b name=/runtime.v1.ImageService/PullImage
	Nov 19 02:05:26 functional-345998 crio[3576]: time="2025-11-19T02:05:26.642786229Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d0be03f3-65b5-4a66-8cfe-c1aa5cf47739 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:06:52 functional-345998 crio[3576]: time="2025-11-19T02:06:52.642783078Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=684e2672-6aae-42d4-aee2-819564764b87 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:06:54 functional-345998 crio[3576]: time="2025-11-19T02:06:54.643515456Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=702cf13a-6f81-400e-a531-31bd47141b49 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:09:37 functional-345998 crio[3576]: time="2025-11-19T02:09:37.642468046Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=93ed0266-9291-4b9d-aa84-791a7fdb7f53 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:09:39 functional-345998 crio[3576]: time="2025-11-19T02:09:39.643034746Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bec27b8b-1819-46ca-8f3a-de6caf12728c name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e63fe675ea27c       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   ebf1de5bc3758       mysql-5bb876957f-pktdw                       default
	217489873e8d1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   02ac77a626cd1       kubernetes-dashboard-855c9754f9-pwdz7        kubernetes-dashboard
	871373024e2cf       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   68071c403e0f2       dashboard-metrics-scraper-77bf4d6c4c-hvc7n   kubernetes-dashboard
	792079a011536       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   e60a0526ac7b4       busybox-mount                                default
	d421f5861264b       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   b07b663fbabeb       sp-pod                                       default
	ef9a1639fc5d5       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   80244703cd1a5       nginx-svc                                    default
	d1e94e8c9db62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   868015d58979b       storage-provisioner                          kube-system
	3a8e14605673d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   1e7bbae23f9bd       kube-controller-manager-functional-345998    kube-system
	a5d1d36df0362       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   4e4627414df73       kube-apiserver-functional-345998             kube-system
	c9223b4cc283c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   b1ae8840a6fdf       etcd-functional-345998                       kube-system
	0d8843b35b67d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   1e7bbae23f9bd       kube-controller-manager-functional-345998    kube-system
	216660713a332       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   1c8cae14e4088       kube-scheduler-functional-345998             kube-system
	02a68774491f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Exited              storage-provisioner         1                   868015d58979b       storage-provisioner                          kube-system
	819ff075be278       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   65171bae14edf       kube-proxy-wx5j8                             kube-system
	9dc942c865ade       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   c30aa8c1122d3       kindnet-rgzxn                                kube-system
	e88997e94e825       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   5c2aee9049814       coredns-66bc5c9577-lqv58                     kube-system
	48f936120c47d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   5c2aee9049814       coredns-66bc5c9577-lqv58                     kube-system
	a54b953d09fac       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   65171bae14edf       kube-proxy-wx5j8                             kube-system
	82117fe01939e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   c30aa8c1122d3       kindnet-rgzxn                                kube-system
	3bea768152cbb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   1c8cae14e4088       kube-scheduler-functional-345998             kube-system
	aff043653194b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   b1ae8840a6fdf       etcd-functional-345998                       kube-system
	
	
	==> coredns [48f936120c47d51f0e8485570dbe665f1c09194d9a17ec2da11c9067b8ec685e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41946 - 48978 "HINFO IN 1499917474551776744.8575252594133591542. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090591194s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e88997e94e8250692d622a666b256bb8cccbac5ae55bd4384e4805ad27e33570] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37193 - 18365 "HINFO IN 2306223770953781653.3495934522586191744. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.09910286s
	
	
	==> describe nodes <==
	Name:               functional-345998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-345998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=functional-345998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_02_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:02:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-345998
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:13:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:13:37 +0000   Wed, 19 Nov 2025 02:02:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:13:37 +0000   Wed, 19 Nov 2025 02:02:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:13:37 +0000   Wed, 19 Nov 2025 02:02:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:13:37 +0000   Wed, 19 Nov 2025 02:02:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-345998
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                efba4833-1365-436b-9011-e1f555972197
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-zt4m9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-7vs5f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-pktdw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m36s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 coredns-66bc5c9577-lqv58                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-345998                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-rgzxn                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-345998              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-345998     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-wx5j8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-345998              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-hvc7n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pwdz7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-345998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-345998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-345998 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-345998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-345998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-345998 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-345998 event: Registered Node functional-345998 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-345998 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-345998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-345998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-345998 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-345998 event: Registered Node functional-345998 in Controller
	
	
	==> dmesg <==
	[  +0.087110] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.840612] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 01:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.036368] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023894] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +2.047754] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 01:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +8.383180] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[ +16.382291] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[ +32.252687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	
	
	==> etcd [aff043653194b3199ea4e03b8a84d29692901aa660fcbf58904792af8b2b72a3] <==
	{"level":"warn","ts":"2025-11-19T02:02:27.476897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:02:27.482919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:02:27.489496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:02:27.495811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:02:27.521561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:02:27.533184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:02:27.578103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34688","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:03:22.411981Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-19T02:03:22.412064Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-345998","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-19T02:03:22.412171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T02:03:22.413688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T02:03:22.413736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:03:22.413765Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-19T02:03:22.413839Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T02:03:22.413858Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-19T02:03:22.413877Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-19T02:03:22.413861Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-19T02:03:22.413895Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T02:03:22.413908Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-19T02:03:22.413883Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T02:03:22.413961Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:03:22.415659Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-19T02:03:22.415716Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:03:22.415740Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-19T02:03:22.415746Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-345998","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [c9223b4cc283c752e1b17bf1abd18663fbf42d4910e445f95353a0d2a864ef37] <==
	{"level":"warn","ts":"2025-11-19T02:03:45.611860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.617479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.623019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.628808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.634749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.649538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.655051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.661485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.667334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.673114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.679758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.685767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.691548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.697376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.703873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.709744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.715890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.732148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.737864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.743388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:03:45.788426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:04:41.085889Z","caller":"traceutil/trace.go:172","msg":"trace[1616275952] transaction","detail":"{read_only:false; response_revision:853; number_of_response:1; }","duration":"187.730469ms","start":"2025-11-19T02:04:40.898135Z","end":"2025-11-19T02:04:41.085866Z","steps":["trace[1616275952] 'process raft request'  (duration: 187.630835ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:13:45.341614Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1144}
	{"level":"info","ts":"2025-11-19T02:13:45.362579Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1144,"took":"20.679821ms","hash":2747488442,"current-db-size-bytes":3436544,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-19T02:13:45.362620Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2747488442,"revision":1144,"compact-revision":-1}
	
	
	==> kernel <==
	 02:14:04 up 56 min,  0 user,  load average: 0.03, 0.16, 0.29
	Linux functional-345998 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [82117fe01939e0b49acc29f0fb1294129214a31b3eca614871d05888ed7aa602] <==
	I1119 02:02:36.883140       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:02:36.883396       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1119 02:02:36.883532       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:02:36.883548       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:02:36.883568       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:02:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:02:37.083233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:02:37.083266       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:02:37.083279       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:02:37.084564       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:02:37.471694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:02:37.471727       1 metrics.go:72] Registering metrics
	I1119 02:02:37.471863       1 controller.go:711] "Syncing nftables rules"
	I1119 02:02:47.084186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:02:47.084257       1 main.go:301] handling current node
	I1119 02:02:57.089908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:02:57.089938       1 main.go:301] handling current node
	I1119 02:03:07.087845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:03:07.087881       1 main.go:301] handling current node
	
	
	==> kindnet [9dc942c865adef4b1a519aea784fc50c0d58974063634dd4a15bec4ab78b3487] <==
	I1119 02:12:02.881606       1 main.go:301] handling current node
	I1119 02:12:12.881597       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:12:12.881628       1 main.go:301] handling current node
	I1119 02:12:22.881793       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:12:22.881852       1 main.go:301] handling current node
	I1119 02:12:32.881845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:12:32.881880       1 main.go:301] handling current node
	I1119 02:12:42.882338       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:12:42.882368       1 main.go:301] handling current node
	I1119 02:12:52.883487       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:12:52.883540       1 main.go:301] handling current node
	I1119 02:13:02.882030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:13:02.882064       1 main.go:301] handling current node
	I1119 02:13:12.881999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:13:12.882026       1 main.go:301] handling current node
	I1119 02:13:22.883538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:13:22.883577       1 main.go:301] handling current node
	I1119 02:13:32.881990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:13:32.882018       1 main.go:301] handling current node
	I1119 02:13:42.882498       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:13:42.882537       1 main.go:301] handling current node
	I1119 02:13:52.884195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:13:52.884229       1 main.go:301] handling current node
	I1119 02:14:02.882286       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:14:02.882322       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5d1d36df03629b1bfc4f56168bb9c417c58942665b5d421e033e4a911a4d21f] <==
	I1119 02:03:46.233104       1 policy_source.go:240] refreshing policies
	I1119 02:03:46.241419       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:03:46.747469       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:03:47.122803       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1119 02:03:47.328400       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1119 02:03:47.329547       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:03:47.333290       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:03:47.977977       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:03:48.059490       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:03:48.100062       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:03:48.104501       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:03:49.588107       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:03:58.199899       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.160.129"}
	I1119 02:04:02.426709       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.78.212"}
	I1119 02:04:02.537532       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.197.37"}
	I1119 02:04:03.072666       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.211.110"}
	E1119 02:04:17.196651       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45362: use of closed network connection
	E1119 02:04:25.029794       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45408: use of closed network connection
	I1119 02:04:26.859806       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:04:26.950219       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.225.16"}
	I1119 02:04:26.964774       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.171.187"}
	I1119 02:04:28.176741       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.152.109"}
	E1119 02:04:47.300307       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59960: use of closed network connection
	E1119 02:04:48.835939       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59970: use of closed network connection
	I1119 02:13:46.154288       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0d8843b35b67d3059cd1b4cf94a60ca620d81dfb02a9a628031b8d18897b0a94] <==
	I1119 02:03:23.334718       1 serving.go:386] Generated self-signed cert in-memory
	I1119 02:03:23.800826       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1119 02:03:23.800846       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:03:23.802336       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 02:03:23.802648       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1119 02:03:23.803410       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1119 02:03:23.803912       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 02:03:44.048737       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [3a8e14605673d7208c30a309c957d691173a3caaf6560e767850c1bfeb29de5a] <==
	I1119 02:03:49.578040       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:03:49.578052       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:03:49.578064       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:03:49.578073       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:03:49.578083       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:03:49.578097       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:03:49.578084       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:03:49.578203       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:03:49.578264       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-345998"
	I1119 02:03:49.578449       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 02:03:49.579408       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 02:03:49.579447       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:03:49.579450       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:03:49.579462       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:03:49.584500       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:03:49.584553       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:03:49.585803       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:03:49.588627       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:03:49.601683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1119 02:04:26.900903       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 02:04:26.904534       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 02:04:26.909016       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 02:04:26.910104       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 02:04:26.913394       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 02:04:26.917508       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [819ff075be2789e3dec9456240921bc8dd1edfb8052bfa9c181967d08d514c7d] <==
	E1119 02:03:12.587911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345998&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:03:13.616657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345998&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:03:16.580939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345998&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:03:32.800824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345998&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:03:44.048861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345998&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:49680->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1119 02:03:58.787211       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:03:58.787253       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 02:03:58.787375       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:03:58.805406       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:03:58.805474       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:03:58.810697       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:03:58.811010       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:03:58.811049       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:03:58.812183       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:03:58.812205       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:03:58.812201       1 config.go:200] "Starting service config controller"
	I1119 02:03:58.812224       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:03:58.812243       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:03:58.812250       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:03:58.812276       1 config.go:309] "Starting node config controller"
	I1119 02:03:58.812317       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:03:58.812329       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:03:58.912814       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:03:58.912822       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:03:58.912851       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [a54b953d09fac3893f03881954818ded175cbe52cc874bf2e435fd9a92fa0e3b] <==
	I1119 02:02:36.732610       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:02:36.787308       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:02:36.888345       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:02:36.888371       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 02:02:36.888513       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:02:36.905003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:02:36.905050       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:02:36.910227       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:02:36.910561       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:02:36.910583       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:02:36.911918       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:02:36.911939       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:02:36.911960       1 config.go:200] "Starting service config controller"
	I1119 02:02:36.911994       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:02:36.912030       1 config.go:309] "Starting node config controller"
	I1119 02:02:36.912047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:02:36.912054       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:02:36.912057       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:02:36.912064       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:02:37.012023       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:02:37.012119       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:02:37.012223       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
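
Both kube-proxy instances log the same advisory: nodePortAddresses is unset, so NodePort services accept traffic on every local IP. The message's own suggestion, `--nodeport-addresses primary`, corresponds to the KubeProxyConfiguration fragment below; in kubeadm-style clusters (minikube included) this lives in the kube-proxy ConfigMap in kube-system, and the advisory is harmless on a single-node test cluster:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# "primary" restricts NodePorts to the node's primary IPs only
	nodePortAddresses: ["primary"]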
	
	
	==> kube-scheduler [216660713a3325dc3b7a09171fec1b3f3e53e6a704e072090581780d97dd48a9] <==
	E1119 02:03:44.055793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:03:44.056250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:03:44.056616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:03:44.056643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:03:44.056808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:03:46.138623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:03:46.138623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:03:46.138800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:03:46.138884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:03:46.138973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:03:46.138974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:03:46.139025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:03:46.139042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:03:46.139087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:03:46.139129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:03:46.139148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:03:46.139242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:03:46.139294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:03:46.139339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:03:46.139361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:03:46.139411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:03:46.139423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:03:46.139490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:03:46.150196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1119 02:03:49.153761       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
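
The errors in this block are two startup phases, both transient: at 02:03:44 the watches fail with "connection refused" because the apiserver is not serving yet, and at 02:03:46 they fail with "forbidden" because the apiserver is up but still loading authorization policy (the kube-apiserver block above logs "refreshing policies" at the same second). By 02:03:49 the scheduler's caches are synced. To ask the apiserver the exact question from these logs against a live cluster, a SubjectAccessReview works; a client-go sketch, where the kubeconfig path is an assumption:

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Mirrors the failing call above: may system:kube-scheduler list csinodes?
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "csinodes",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
			context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
	}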
	
	
	==> kube-scheduler [3bea768152cbb58fd71bb8ccceb0857bb302423e3440b8b253b6677b89562687] <==
	E1119 02:02:27.951427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:02:27.951469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:02:27.951523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:02:27.951559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:02:27.951672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:02:27.951672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:02:27.951703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:02:27.951716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:02:28.758995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:02:28.776015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:02:28.781913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:02:28.829797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:02:28.843565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:02:28.902751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:02:28.944066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:02:29.032749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:02:29.149304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:02:29.162168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1119 02:02:29.547317       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:03:22.191865       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1119 02:03:22.191998       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:03:22.192061       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1119 02:03:22.192087       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1119 02:03:22.192110       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1119 02:03:22.192133       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 19 02:11:28 functional-345998 kubelet[4164]: E1119 02:11:28.642724    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:11:39 functional-345998 kubelet[4164]: E1119 02:11:39.642747    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:11:41 functional-345998 kubelet[4164]: E1119 02:11:41.642597    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:11:51 functional-345998 kubelet[4164]: E1119 02:11:51.642605    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:11:53 functional-345998 kubelet[4164]: E1119 02:11:53.642854    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:12:04 functional-345998 kubelet[4164]: E1119 02:12:04.643487    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:12:06 functional-345998 kubelet[4164]: E1119 02:12:06.642109    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:12:15 functional-345998 kubelet[4164]: E1119 02:12:15.643023    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:12:20 functional-345998 kubelet[4164]: E1119 02:12:20.642191    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:12:26 functional-345998 kubelet[4164]: E1119 02:12:26.642669    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:12:35 functional-345998 kubelet[4164]: E1119 02:12:35.642874    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:12:38 functional-345998 kubelet[4164]: E1119 02:12:38.642925    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:12:48 functional-345998 kubelet[4164]: E1119 02:12:48.642936    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:12:51 functional-345998 kubelet[4164]: E1119 02:12:51.642842    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:12:59 functional-345998 kubelet[4164]: E1119 02:12:59.642991    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:13:06 functional-345998 kubelet[4164]: E1119 02:13:06.642680    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:13:11 functional-345998 kubelet[4164]: E1119 02:13:11.642730    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:13:17 functional-345998 kubelet[4164]: E1119 02:13:17.642555    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:13:24 functional-345998 kubelet[4164]: E1119 02:13:24.642714    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:13:30 functional-345998 kubelet[4164]: E1119 02:13:30.642935    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:13:37 functional-345998 kubelet[4164]: E1119 02:13:37.642628    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:13:42 functional-345998 kubelet[4164]: E1119 02:13:42.642147    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:13:52 functional-345998 kubelet[4164]: E1119 02:13:52.642841    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
	Nov 19 02:13:57 functional-345998 kubelet[4164]: E1119 02:13:57.642797    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7vs5f" podUID="d28c14e3-04b9-495c-8b69-79f1306127b5"
	Nov 19 02:14:03 functional-345998 kubelet[4164]: E1119 02:14:03.642699    4164 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-zt4m9" podUID="1b04b5e8-1487-45ba-b90a-7e17af415926"
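
Every kubelet error above is one underlying failure repeated for the two hello-node pods: the manifests reference the short image name kicbase/echo-server, and CRI-O's short-name resolution (containers/image) is in enforcing mode, so an unqualified name that matches more than one candidate registry is rejected as "ambiguous" rather than pulled. The pods therefore sit in ImagePullBackOff and their deployments never become ready. The manifest-side fix is a fully qualified reference (docker.io/kicbase/echo-server:latest, assuming Docker Hub is the intended source); on the host side the standard knobs live in registries.conf(5), sketched here since the actual file contents are not shown in this log:

	# /etc/containers/registries.conf (TOML)
	unqualified-search-registries = ["docker.io"]

	# Option 1: resolve ambiguous short names by search order instead of failing
	short-name-mode = "permissive"

	# Option 2: keep enforcing mode, but pin this short name explicitly
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"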
	
	
	==> kubernetes-dashboard [217489873e8d13bee1b4380a4f69aeb8b86762e3132ba0d5055a8079ef995823] <==
	2025/11/19 02:04:31 Starting overwatch
	2025/11/19 02:04:31 Using namespace: kubernetes-dashboard
	2025/11/19 02:04:31 Using in-cluster config to connect to apiserver
	2025/11/19 02:04:31 Using secret token for csrf signing
	2025/11/19 02:04:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:04:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:04:31 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 02:04:31 Generating JWE encryption key
	2025/11/19 02:04:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:04:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:04:32 Initializing JWE encryption key from synchronized object
	2025/11/19 02:04:32 Creating in-cluster Sidecar client
	2025/11/19 02:04:32 Successful request to sidecar
	2025/11/19 02:04:32 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [02a68774491f42199ef86e1ea4990314c25a936ce9297b4f10dd6c8a8072212b] <==
	I1119 02:03:12.502685       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:03:12.505384       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d1e94e8c9db622687702514f2ba2d64ff9b267e227ff7732f8862038803bcea8] <==
	W1119 02:13:40.344908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:42.347770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:42.352648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:44.355354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:44.358912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:46.361524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:46.364838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:48.367596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:48.371042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:50.374138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:50.377555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:52.380610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:52.385278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:54.387872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:54.393243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:56.395805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:56.400511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:58.403214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:13:58.406249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:14:00.409009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:14:00.413603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:14:02.416530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:14:02.420377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:14:04.423577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:14:04.428353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
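
Two separate things show in the storage-provisioner logs above: the first container instance died at startup because the apiserver at 10.96.0.1:443 was not yet accepting connections (a normal crash-and-restart during bring-up), while the replacement runs and merely emits deprecation warnings because its leader election still writes v1 Endpoints, deprecated in v1.33+ in favor of EndpointSlice. Neither is the cause of this test's failure. The election lock can be inspected directly; the object name k8s.io-minikube-hostpath is an assumption based on the provisioner's name, not something this run shows:

	kubectl --context functional-345998 -n kube-system get endpoints k8s.io-minikube-hostpath
	kubectl --context functional-345998 -n kube-system get endpointslices
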
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-345998 -n functional-345998
helpers_test.go:269: (dbg) Run:  kubectl --context functional-345998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-zt4m9 hello-node-connect-7d85dfc575-7vs5f
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-345998 describe pod busybox-mount hello-node-75c85bcc94-zt4m9 hello-node-connect-7d85dfc575-7vs5f
helpers_test.go:290: (dbg) kubectl --context functional-345998 describe pod busybox-mount hello-node-75c85bcc94-zt4m9 hello-node-connect-7d85dfc575-7vs5f:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345998/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 02:04:18 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://792079a011536d2bcb0d2739003292c033dd8ed678aa9855850ca304362de582
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 19 Nov 2025 02:04:19 +0000
	      Finished:     Wed, 19 Nov 2025 02:04:19 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r4sjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-r4sjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m47s  default-scheduler  Successfully assigned default/busybox-mount to functional-345998
	  Normal  Pulling    9m46s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m46s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 855ms (855ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m46s  kubelet            Created container: mount-munger
	  Normal  Started    9m46s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-zt4m9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345998/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 02:04:03 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwqnx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qwqnx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-zt4m9 to functional-345998
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x42 over 10m)     kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-7vs5f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345998/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 02:04:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r89zx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r89zx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7vs5f to functional-345998
	  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.76s)
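
Every image failure in this run traces back to the kubelet events above: CRI-O resolves unqualified image names through the containers-registries short-name policy, and in enforcing mode a bare name such as kicbase/echo-server must match either a configured alias or exactly one unqualified-search registry; "returns ambiguous list" means neither held. A minimal fix sketch, a node-side alias drop-in (the file name, and docker.io as the image's home registry, are assumptions, not verified by this run):

	# Sketch: map the short name to a fully qualified one so kubelet's next
	# pull attempt resolves; CRI-O may need a restart to reread the drop-in dir.
	out/minikube-linux-amd64 -p functional-345998 ssh -- sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<-'EOF'
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF

The alternative, requiring no node-side change, is to reference the image by a fully qualified name in the workloads themselves.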

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-345998 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-345998 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-zt4m9" [1b04b5e8-1487-45ba-b90a-7e17af415926] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-345998 -n functional-345998
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-19 02:14:03.397324755 +0000 UTC m=+1070.348504373
functional_test.go:1460: (dbg) Run:  kubectl --context functional-345998 describe po hello-node-75c85bcc94-zt4m9 -n default
functional_test.go:1460: (dbg) kubectl --context functional-345998 describe po hello-node-75c85bcc94-zt4m9 -n default:
Name:             hello-node-75c85bcc94-zt4m9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-345998/192.168.49.2
Start Time:       Wed, 19 Nov 2025 02:04:03 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwqnx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qwqnx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-zt4m9 to functional-345998
Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-345998 logs hello-node-75c85bcc94-zt4m9 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-345998 logs hello-node-75c85bcc94-zt4m9 -n default: exit status 1 (62.33095ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-zt4m9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-345998 logs hello-node-75c85bcc94-zt4m9 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.57s)
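
The deployment created at functional_test.go:1451 uses the bare name kicbase/echo-server, so it hits the same enforcing short-name policy as above. A sketch of the qualified-name workaround, which bypasses short-name resolution entirely (docker.io as the prefix is an assumption about where the image lives):

	# Fully qualified reference: no alias or search-registry lookup involved.
	kubectl --context functional-345998 create deployment hello-node --image docker.io/kicbase/echo-server:latest
	kubectl --context functional-345998 rollout status deployment/hello-node --timeout=120s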

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image load --daemon kicbase/echo-server:functional-345998 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-345998" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)
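
`image load --daemon` exports the image from the host Docker daemon and imports it into the node's CRI-O store, so there are two places to check when `image ls` comes back empty. A quick diagnostic sketch (the --format value follows minikube's help text and is an assumption):

	# 1. Does the host daemon actually hold the tag the test expects?
	docker image inspect --format '{{.Id}}' kicbase/echo-server:functional-345998
	# 2. What does the node's image store report?
	out/minikube-linux-amd64 -p functional-345998 image ls --format table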

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image load --daemon kicbase/echo-server:functional-345998 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-345998" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-345998
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image load --daemon kicbase/echo-server:functional-345998 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-345998" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image save kicbase/echo-server:functional-345998 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)
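
The missing tarball follows from the failed daemon load: the image was never present in the cluster, so `image save` had nothing to write, and the later ImageLoadFromFile failure (stat: no such file or directory) is downstream of this one. A host-side round trip that avoids the daemon transfer, as a sketch (paths are illustrative):

	# Save from the Docker daemon to a tar, then load the tar into the node.
	docker save kicbase/echo-server:functional-345998 -o /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-345998 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-345998 image ls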

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1119 02:04:15.308511   48219 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:04:15.308820   48219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:15.308831   48219 out.go:374] Setting ErrFile to fd 2...
	I1119 02:04:15.308835   48219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:15.309076   48219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:04:15.309620   48219 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:04:15.309726   48219 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:04:15.310118   48219 cli_runner.go:164] Run: docker container inspect functional-345998 --format={{.State.Status}}
	I1119 02:04:15.327121   48219 ssh_runner.go:195] Run: systemctl --version
	I1119 02:04:15.327285   48219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345998
	I1119 02:04:15.343753   48219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/functional-345998/id_rsa Username:docker}
	I1119 02:04:15.435461   48219 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1119 02:04:15.435529   48219 cache_images.go:255] Failed to load cached images for "functional-345998": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1119 02:04:15.435552   48219 cache_images.go:267] failed pushing to: functional-345998

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-345998
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image save --daemon kicbase/echo-server:functional-345998 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-345998
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-345998: exit status 1 (17.741292ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-345998

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-345998

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)
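
The `localhost/` prefix in the assertion is expected: containers-storage files an unqualified tag under the localhost/ pseudo-registry, so after a successful `image save --daemon` the Docker side would carry localhost/kicbase/echo-server:functional-345998. Here the chain was broken upstream (nothing was ever loaded into CRI-O), so the save completed without producing an image. A sketch of checking both stores:

	# Docker side: `docker images` accepts a repository argument as a filter.
	docker images localhost/kicbase/echo-server
	# CRI-O side, inside the node:
	out/minikube-linux-amd64 -p functional-345998 ssh -- sudo crictl images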

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 service --namespace=default --https --url hello-node: exit status 115 (516.021824ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31329
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-345998 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
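
Exit status 115 (SVC_UNREACHABLE) is minikube declining to hand out a URL for a service with no running backing pod; the NodePort itself was allocated, which is why the URL still appears on stdout. With the hello-node pods stuck in ImagePullBackOff the endpoints list stays empty, which can be confirmed directly:

	# Both should come back empty/not-ready while the image pull is failing.
	kubectl --context functional-345998 get endpoints hello-node
	kubectl --context functional-345998 get pods -l app=hello-node -o wide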

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 service hello-node --url --format={{.IP}}: exit status 115 (520.229452ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-345998 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 service hello-node --url: exit status 115 (517.567314ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31329
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-345998 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31329
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
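
Note the test still logged "found endpoint for hello-node: http://192.168.49.2:31329": the Service object and its NodePort exist, only the selector matches no ready pod. Probing the port from the host shows the distinction, as a sketch (failure is the expected outcome while the deployment is in ImagePullBackOff):

	curl --max-time 5 http://192.168.49.2:31329 || echo "NodePort allocated but no ready backend"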

                                                
                                    
TestJSONOutput/pause/Command (1.9s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-955461 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-955461 --output=json --user=testUser: exit status 80 (1.902240126s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dc775e2f-2274-4cc1-aa2c-1136ac3ffd8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-955461 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"de01cc85-2696-438e-b55d-c992c2145dc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-19T02:23:23Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"7dad3c84-9e9b-4aff-ba7a-fbfeeb131a15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-955461 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.90s)
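
The pause path shells out to `sudo runc list -f json`, which reads runc's default state root at /run/runc; "open /run/runc: no such file or directory" means no container was ever created through runc at that root, typically because CRI-O is wired to a different low-level runtime (e.g. crun) or a non-default root. Two node-side checks, as a sketch (crio config output format may vary by version):

	out/minikube-linux-amd64 -p json-output-955461 ssh -- sudo crio config | grep default_runtime
	out/minikube-linux-amd64 -p json-output-955461 ssh -- sudo ls /run/runc /run/crun

The same error recurs below in TestJSONOutput/unpause/Command and TestPause/serial/Pause.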

                                                
                                    
TestJSONOutput/unpause/Command (1.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-955461 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-955461 --output=json --user=testUser: exit status 80 (1.614142216s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"364d76b6-916f-4f0e-a572-d0697651ea2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-955461 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a291c7bf-026b-4101-86d1-0a9cb443ff42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-19T02:23:24Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"223cc4e9-b07c-4f33-9b16-043557baae00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-955461 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.61s)

                                                
                                    
TestPause/serial/Pause (7.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-881232 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-881232 --alsologtostderr -v=5: exit status 80 (2.462577867s)

                                                
                                                
-- stdout --
	* Pausing node pause-881232 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:36:17.363199  202075 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:36:17.363312  202075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:36:17.363322  202075 out.go:374] Setting ErrFile to fd 2...
	I1119 02:36:17.363326  202075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:36:17.363581  202075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:36:17.363845  202075 out.go:368] Setting JSON to false
	I1119 02:36:17.363896  202075 mustload.go:66] Loading cluster: pause-881232
	I1119 02:36:17.364291  202075 config.go:182] Loaded profile config "pause-881232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:17.364679  202075 cli_runner.go:164] Run: docker container inspect pause-881232 --format={{.State.Status}}
	I1119 02:36:17.383936  202075 host.go:66] Checking if "pause-881232" exists ...
	I1119 02:36:17.384274  202075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:36:17.442165  202075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-19 02:36:17.43282832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:36:17.442837  202075 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-881232 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 02:36:17.444415  202075 out.go:179] * Pausing node pause-881232 ... 
	I1119 02:36:17.445542  202075 host.go:66] Checking if "pause-881232" exists ...
	I1119 02:36:17.445813  202075 ssh_runner.go:195] Run: systemctl --version
	I1119 02:36:17.445854  202075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-881232
	I1119 02:36:17.462753  202075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/pause-881232/id_rsa Username:docker}
	I1119 02:36:17.554525  202075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:36:17.566637  202075 pause.go:52] kubelet running: true
	I1119 02:36:17.566699  202075 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:36:17.693757  202075 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:36:17.693831  202075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:36:17.756655  202075 cri.go:89] found id: "69e0618cecb035be983b2a6678f6e14c1b34ab986a8689b90f7c7144471b35b9"
	I1119 02:36:17.756677  202075 cri.go:89] found id: "511e241319afaf190c8ae5fbaab63004ea6b45dbacde334fa1fa419fb575a64d"
	I1119 02:36:17.756681  202075 cri.go:89] found id: "320a34763e28b598bbe46ea80965cb57c40ec57cc3b3763cdac4edbbbd143b2e"
	I1119 02:36:17.756685  202075 cri.go:89] found id: "e94cef8f5297a95085b21790340c5af52657f36386ceb1facf88e2bd446b4068"
	I1119 02:36:17.756687  202075 cri.go:89] found id: "caa25c304f3729a515d82f1166a519c66499e8206ee80da50fdc4a55a960dd9b"
	I1119 02:36:17.756690  202075 cri.go:89] found id: "6036077de493ec8ea2df1816e33a020594bb347d174a707c3af9a3205f778d4b"
	I1119 02:36:17.756692  202075 cri.go:89] found id: "4be7b1d78923f1e87d3fe58c5f214a995d5ce37bb7387073eff5b7b26fc63630"
	I1119 02:36:17.756695  202075 cri.go:89] found id: ""
	I1119 02:36:17.756734  202075 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:36:17.767485  202075 retry.go:31] will retry after 215.836795ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:36:17Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:36:17.983974  202075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:36:17.996862  202075 pause.go:52] kubelet running: false
	I1119 02:36:17.996919  202075 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:36:18.104140  202075 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:36:18.104258  202075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:36:18.167487  202075 cri.go:89] found id: "69e0618cecb035be983b2a6678f6e14c1b34ab986a8689b90f7c7144471b35b9"
	I1119 02:36:18.167510  202075 cri.go:89] found id: "511e241319afaf190c8ae5fbaab63004ea6b45dbacde334fa1fa419fb575a64d"
	I1119 02:36:18.167514  202075 cri.go:89] found id: "320a34763e28b598bbe46ea80965cb57c40ec57cc3b3763cdac4edbbbd143b2e"
	I1119 02:36:18.167516  202075 cri.go:89] found id: "e94cef8f5297a95085b21790340c5af52657f36386ceb1facf88e2bd446b4068"
	I1119 02:36:18.167519  202075 cri.go:89] found id: "caa25c304f3729a515d82f1166a519c66499e8206ee80da50fdc4a55a960dd9b"
	I1119 02:36:18.167521  202075 cri.go:89] found id: "6036077de493ec8ea2df1816e33a020594bb347d174a707c3af9a3205f778d4b"
	I1119 02:36:18.167524  202075 cri.go:89] found id: "4be7b1d78923f1e87d3fe58c5f214a995d5ce37bb7387073eff5b7b26fc63630"
	I1119 02:36:18.167544  202075 cri.go:89] found id: ""
	I1119 02:36:18.167583  202075 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:36:18.180089  202075 retry.go:31] will retry after 409.898363ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:36:18Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:36:18.590571  202075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:36:18.603799  202075 pause.go:52] kubelet running: false
	I1119 02:36:18.603894  202075 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:36:18.714508  202075 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:36:18.714580  202075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:36:18.787462  202075 cri.go:89] found id: "69e0618cecb035be983b2a6678f6e14c1b34ab986a8689b90f7c7144471b35b9"
	I1119 02:36:18.787484  202075 cri.go:89] found id: "511e241319afaf190c8ae5fbaab63004ea6b45dbacde334fa1fa419fb575a64d"
	I1119 02:36:18.787489  202075 cri.go:89] found id: "320a34763e28b598bbe46ea80965cb57c40ec57cc3b3763cdac4edbbbd143b2e"
	I1119 02:36:18.787492  202075 cri.go:89] found id: "e94cef8f5297a95085b21790340c5af52657f36386ceb1facf88e2bd446b4068"
	I1119 02:36:18.787495  202075 cri.go:89] found id: "caa25c304f3729a515d82f1166a519c66499e8206ee80da50fdc4a55a960dd9b"
	I1119 02:36:18.787497  202075 cri.go:89] found id: "6036077de493ec8ea2df1816e33a020594bb347d174a707c3af9a3205f778d4b"
	I1119 02:36:18.787500  202075 cri.go:89] found id: "4be7b1d78923f1e87d3fe58c5f214a995d5ce37bb7387073eff5b7b26fc63630"
	I1119 02:36:18.787502  202075 cri.go:89] found id: ""
	I1119 02:36:18.787550  202075 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:36:18.799630  202075 retry.go:31] will retry after 735.850419ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:36:18Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:36:19.536594  202075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:36:19.551315  202075 pause.go:52] kubelet running: false
	I1119 02:36:19.551354  202075 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:36:19.663861  202075 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:36:19.663932  202075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:36:19.735347  202075 cri.go:89] found id: "69e0618cecb035be983b2a6678f6e14c1b34ab986a8689b90f7c7144471b35b9"
	I1119 02:36:19.735375  202075 cri.go:89] found id: "511e241319afaf190c8ae5fbaab63004ea6b45dbacde334fa1fa419fb575a64d"
	I1119 02:36:19.735381  202075 cri.go:89] found id: "320a34763e28b598bbe46ea80965cb57c40ec57cc3b3763cdac4edbbbd143b2e"
	I1119 02:36:19.735385  202075 cri.go:89] found id: "e94cef8f5297a95085b21790340c5af52657f36386ceb1facf88e2bd446b4068"
	I1119 02:36:19.735389  202075 cri.go:89] found id: "caa25c304f3729a515d82f1166a519c66499e8206ee80da50fdc4a55a960dd9b"
	I1119 02:36:19.735393  202075 cri.go:89] found id: "6036077de493ec8ea2df1816e33a020594bb347d174a707c3af9a3205f778d4b"
	I1119 02:36:19.735397  202075 cri.go:89] found id: "4be7b1d78923f1e87d3fe58c5f214a995d5ce37bb7387073eff5b7b26fc63630"
	I1119 02:36:19.735400  202075 cri.go:89] found id: ""
	I1119 02:36:19.735473  202075 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:36:19.750683  202075 out.go:203] 
	W1119 02:36:19.751954  202075 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:36:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:36:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:36:19.751968  202075 out.go:285] * 
	* 
	W1119 02:36:19.756514  202075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:36:19.757884  202075 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-881232 --alsologtostderr -v=5" : exit status 80
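The fatal "open /run/runc: no such file or directory" is consistent with the runc state directory simply not existing on the node: the CRI-O configuration dumped later in this report sets default_runtime = "crun" (state under /run/crun), while the pause path shells out to `runc list`, which reads /run/runc. A quick, hypothetical probe for that mismatch (directory names taken from the runtime_root values in the CRI-O config below):

package main

import (
	"fmt"
	"os"
)

// Report which OCI-runtime state directories are actually present.
func main() {
	for _, dir := range []string{"/run/runc", "/run/crun"} {
		if _, err := os.Stat(dir); err != nil {
			fmt.Printf("%s: %v\n", dir, err)
			continue
		}
		fmt.Printf("%s: present\n", dir)
	}
}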
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-881232
helpers_test.go:243: (dbg) docker inspect pause-881232:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70",
	        "Created": "2025-11-19T02:35:29.052359651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186515,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:35:29.104580014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70/hostname",
	        "HostsPath": "/var/lib/docker/containers/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70/hosts",
	        "LogPath": "/var/lib/docker/containers/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70-json.log",
	        "Name": "/pause-881232",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-881232:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-881232",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70",
	                "LowerDir": "/var/lib/docker/overlay2/881305dbc525f9bbedffba7ad65358525a65ea468b24a884cd17ad87e6c69102-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/881305dbc525f9bbedffba7ad65358525a65ea468b24a884cd17ad87e6c69102/merged",
	                "UpperDir": "/var/lib/docker/overlay2/881305dbc525f9bbedffba7ad65358525a65ea468b24a884cd17ad87e6c69102/diff",
	                "WorkDir": "/var/lib/docker/overlay2/881305dbc525f9bbedffba7ad65358525a65ea468b24a884cd17ad87e6c69102/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-881232",
	                "Source": "/var/lib/docker/volumes/pause-881232/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-881232",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-881232",
	                "name.minikube.sigs.k8s.io": "pause-881232",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "925efb81d39fd592ddf0d88b17d407f4b0c7de84ddf2effde849fe77963eaf09",
	            "SandboxKey": "/var/run/docker/netns/925efb81d39f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-881232": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c5c0c9647194999713e748225f81c7859136d53330b468e9c69564dd49c80ea",
	                    "EndpointID": "52a07fd5441349fa9a6a3573f8fc8251707a6921e47af163f8dd23455b66c708",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "82:ba:4c:0d:a4:97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-881232",
	                        "86f5efc2e490"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
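Rather than reading the full JSON dump above, individual fields can be pulled with `docker inspect --format` and a Go template (a standard docker CLI feature). A small sketch using the container name from this run (inspectField is an illustrative helper, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs `docker inspect --format <tmpl> <name>` and returns
// the trimmed result.
func inspectField(name, tmpl string) (string, error) {
	out, err := exec.Command("docker", "inspect", "--format", tmpl, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Container state, e.g. "running" as shown in the inspect output above.
	status, err := inspectField("pause-881232", "{{.State.Status}}")
	fmt.Println("state:", status, err)

	// Host port published for the container's SSH port (22/tcp).
	port, err := inspectField("pause-881232",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	fmt.Println("ssh host port:", port, err)
}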
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-881232 -n pause-881232
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-881232 -n pause-881232: exit status 2 (337.648284ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-881232 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ stop    │ -p scheduled-stop-693027 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │                     │
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │                     │
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │ 19 Nov 25 02:34 UTC │
	│ delete  │ -p scheduled-stop-693027                                                                                                                                                                                                  │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p insufficient-storage-609759 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-609759 │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │                     │
	│ delete  │ -p insufficient-storage-609759                                                                                                                                                                                            │ insufficient-storage-609759 │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p force-systemd-env-924069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-924069    │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p force-systemd-flag-103780 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-103780   │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p pause-881232 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-881232                │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:36 UTC │
	│ start   │ -p offline-crio-852644 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-852644         │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │                     │
	│ ssh     │ force-systemd-flag-103780 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-103780   │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p force-systemd-flag-103780                                                                                                                                                                                              │ force-systemd-flag-103780   │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p force-systemd-env-924069                                                                                                                                                                                               │ force-systemd-env-924069    │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p cert-expiration-455061 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-455061      │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:36 UTC │
	│ start   │ -p cert-options-336989 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-336989         │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:36 UTC │
	│ start   │ -p pause-881232 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-881232                │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ ssh     │ cert-options-336989 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-336989         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ ssh     │ -p cert-options-336989 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-336989         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ delete  │ -p cert-options-336989                                                                                                                                                                                                    │ cert-options-336989         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ pause   │ -p pause-881232 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-881232                │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │                     │
	│ start   │ -p NoKubernetes-358955 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                             │ NoKubernetes-358955         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │                     │
	│ start   │ -p NoKubernetes-358955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                     │ NoKubernetes-358955         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:36:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:36:19.347831  202600 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:36:19.347915  202600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:36:19.347919  202600 out.go:374] Setting ErrFile to fd 2...
	I1119 02:36:19.347922  202600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:36:19.348111  202600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:36:19.348506  202600 out.go:368] Setting JSON to false
	I1119 02:36:19.349506  202600 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4726,"bootTime":1763515053,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:36:19.349592  202600 start.go:143] virtualization: kvm guest
	I1119 02:36:19.352269  202600 out.go:179] * [NoKubernetes-358955] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:36:19.353565  202600 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:36:19.353583  202600 notify.go:221] Checking for updates...
	I1119 02:36:19.355891  202600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:36:19.356928  202600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:36:19.357995  202600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:36:19.359018  202600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:36:19.360011  202600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:36:19.361407  202600 config.go:182] Loaded profile config "cert-expiration-455061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:19.361531  202600 config.go:182] Loaded profile config "offline-crio-852644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:19.361634  202600 config.go:182] Loaded profile config "pause-881232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:19.361723  202600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:36:19.384063  202600 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:36:19.384150  202600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:36:19.439757  202600 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:36:19.430072194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:36:19.439858  202600 docker.go:319] overlay module found
	I1119 02:36:19.441507  202600 out.go:179] * Using the docker driver based on user configuration
	I1119 02:36:19.442579  202600 start.go:309] selected driver: docker
	I1119 02:36:19.442593  202600 start.go:930] validating driver "docker" against <nil>
	I1119 02:36:19.442613  202600 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:36:19.443154  202600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:36:19.499374  202600 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:36:19.48921501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:36:19.499555  202600 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:36:19.499743  202600 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 02:36:19.501389  202600 out.go:179] * Using Docker driver with root privileges
	I1119 02:36:19.502453  202600 cni.go:84] Creating CNI manager for ""
	I1119 02:36:19.502532  202600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:36:19.502548  202600 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:36:19.502622  202600 start.go:353] cluster config:
	{Name:NoKubernetes-358955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-358955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:36:19.503706  202600 out.go:179] * Starting "NoKubernetes-358955" primary control-plane node in "NoKubernetes-358955" cluster
	I1119 02:36:19.504878  202600 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:36:19.506025  202600 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:36:19.507010  202600 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:36:19.507040  202600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:36:19.507056  202600 cache.go:65] Caching tarball of preloaded images
	I1119 02:36:19.507110  202600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:36:19.507135  202600 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:36:19.507150  202600 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:36:19.507252  202600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/NoKubernetes-358955/config.json ...
	I1119 02:36:19.507276  202600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/NoKubernetes-358955/config.json: {Name:mk328c666f5ac3d02e64668e411b2ecd63ab7065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:36:19.526522  202600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:36:19.526543  202600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:36:19.526559  202600 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:36:19.526582  202600 start.go:360] acquireMachinesLock for NoKubernetes-358955: {Name:mkdd15688f89c119a76a0016afd599ec23aa4449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:36:19.526659  202600 start.go:364] duration metric: took 64.106µs to acquireMachinesLock for "NoKubernetes-358955"
	I1119 02:36:19.526680  202600 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-358955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-358955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:36:19.526731  202600 start.go:125] createHost starting for "" (driver="docker")
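	The acquireMachinesLock lines above (Delay:500ms Timeout:10m0s, acquired here in ~64µs) describe a named lock with a retry delay and an overall deadline. A hypothetical file-based sketch of that shape (minikube's actual lock.go works differently; the path and release mechanics here are assumptions):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock retries an exclusive create of path every delay until it
// succeeds or timeout elapses; removing path releases the lock.
func acquireLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close()
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	fmt.Printf("took %s, err=%v\n", time.Since(start), err)
}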
	
	
	==> CRI-O <==
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.099376209Z" level=info msg="RDT not available in the host system"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.099388707Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.100278129Z" level=info msg="Conmon does support the --sync option"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.10030337Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.100321045Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.101183355Z" level=info msg="Conmon does support the --sync option"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.101204599Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.105275148Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.105298117Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.106098454Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.106592732Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.106651501Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188326151Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-9z4kk Namespace:kube-system ID:459dd6ae14656b22173e4ca65f1b3bfbe236e25c462646b70be820878c3c99b9 UID:f4ad03b0-bc23-416f-9fd3-ec5b3301649f NetNS:/var/run/netns/52135fa1-9602-4fc3-b1d2-dab37944dab0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132980}] Aliases:map[]}"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188514056Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-9z4kk for CNI network kindnet (type=ptp)"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188935861Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188955035Z" level=info msg="Starting seccomp notifier watcher"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188994688Z" level=info msg="Create NRI interface"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189096395Z" level=info msg="built-in NRI default validator is disabled"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189104976Z" level=info msg="runtime interface created"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189118956Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189127116Z" level=info msg="runtime interface starting up..."
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189134685Z" level=info msg="starting plugins..."
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189149524Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189475898Z" level=info msg="No systemd watchdog enabled"
	Nov 19 02:36:14 pause-881232 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	69e0618cecb03       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   459dd6ae14656       coredns-66bc5c9577-9z4kk               kube-system
	511e241319afa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   75efa0a1bd9ca       kindnet-stg5s                          kube-system
	320a34763e28b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   880f60d293fd3       kube-proxy-ttd9g                       kube-system
	e94cef8f5297a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   aae1249adcf31       kube-scheduler-pause-881232            kube-system
	caa25c304f372       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   1ca32618ccf89       etcd-pause-881232                      kube-system
	6036077de493e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   4b3941362fd97       kube-controller-manager-pause-881232   kube-system
	4be7b1d78923f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   8608ec5edcb07       kube-apiserver-pause-881232            kube-system
	
	
	==> coredns [69e0618cecb035be983b2a6678f6e14c1b34ab986a8689b90f7c7144471b35b9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40289 - 54371 "HINFO IN 3661780102850737674.2793788983406499339. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.862215725s
	
	
	==> describe nodes <==
	Name:               pause-881232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-881232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=pause-881232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_35_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:35:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-881232
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:36:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:36:07 +0000   Wed, 19 Nov 2025 02:35:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:36:07 +0000   Wed, 19 Nov 2025 02:35:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:36:07 +0000   Wed, 19 Nov 2025 02:35:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:36:07 +0000   Wed, 19 Nov 2025 02:36:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-881232
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                9d88f66c-0933-418e-9b3d-09d7f7d6899b
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9z4kk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-881232                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-stg5s                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-881232             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-881232    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-ttd9g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-881232             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-881232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-881232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-881232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-881232 event: Registered Node pause-881232 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-881232 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.087110] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.840612] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 01:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.036368] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023894] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +2.047754] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 01:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +8.383180] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[ +16.382291] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[ +32.252687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	
	
	==> etcd [caa25c304f3729a515d82f1166a519c66499e8206ee80da50fdc4a55a960dd9b] <==
	{"level":"warn","ts":"2025-11-19T02:35:47.617228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:47.675021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:35:53.179524Z","caller":"traceutil/trace.go:172","msg":"trace[1869520130] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"103.305341ms","start":"2025-11-19T02:35:53.076195Z","end":"2025-11-19T02:35:53.179500Z","steps":["trace[1869520130] 'process raft request'  (duration: 40.732793ms)","trace[1869520130] 'compare'  (duration: 62.450636ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:53.446235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.068832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T02:35:53.446326Z","caller":"traceutil/trace.go:172","msg":"trace[45827625] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:285; }","duration":"192.202032ms","start":"2025-11-19T02:35:53.254107Z","end":"2025-11-19T02:35:53.446309Z","steps":["trace[45827625] 'agreement among raft nodes before linearized reading'  (duration: 64.13585ms)","trace[45827625] 'range keys from in-memory index tree'  (duration: 127.904845ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:53.446647Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.94266ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356742168864193 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" value_size:130 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T02:35:53.446718Z","caller":"traceutil/trace.go:172","msg":"trace[1735801983] transaction","detail":"{read_only:false; response_revision:286; number_of_response:1; }","duration":"221.533152ms","start":"2025-11-19T02:35:53.225172Z","end":"2025-11-19T02:35:53.446706Z","steps":["trace[1735801983] 'process raft request'  (duration: 93.107878ms)","trace[1735801983] 'compare'  (duration: 127.859331ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:35:54.299526Z","caller":"traceutil/trace.go:172","msg":"trace[1800090019] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"123.455202ms","start":"2025-11-19T02:35:54.176055Z","end":"2025-11-19T02:35:54.299510Z","steps":["trace[1800090019] 'process raft request'  (duration: 123.332002ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:35:54.560359Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.440072ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356742168864219 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" value_size:124 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T02:35:54.560608Z","caller":"traceutil/trace.go:172","msg":"trace[365308142] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"234.846672ms","start":"2025-11-19T02:35:54.325735Z","end":"2025-11-19T02:35:54.560582Z","steps":["trace[365308142] 'process raft request'  (duration: 105.135661ms)","trace[365308142] 'compare'  (duration: 129.326509ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:54.859696Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.462497ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356742168864222 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" value_size:138 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T02:35:54.859781Z","caller":"traceutil/trace.go:172","msg":"trace[226184537] linearizableReadLoop","detail":"{readStateIndex:304; appliedIndex:303; }","duration":"112.507671ms","start":"2025-11-19T02:35:54.747262Z","end":"2025-11-19T02:35:54.859770Z","steps":["trace[226184537] 'read index received'  (duration: 26.403µs)","trace[226184537] 'applied index is now lower than readState.Index'  (duration: 112.480398ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:54.859850Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.583916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T02:35:54.859872Z","caller":"traceutil/trace.go:172","msg":"trace[275481823] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:293; }","duration":"112.612001ms","start":"2025-11-19T02:35:54.747254Z","end":"2025-11-19T02:35:54.859866Z","steps":["trace[275481823] 'agreement among raft nodes before linearized reading'  (duration: 112.557056ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:54.859863Z","caller":"traceutil/trace.go:172","msg":"trace[896682309] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"292.221576ms","start":"2025-11-19T02:35:54.567579Z","end":"2025-11-19T02:35:54.859801Z","steps":["trace[896682309] 'process raft request'  (duration: 94.603106ms)","trace[896682309] 'compare'  (duration: 197.358639ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:35:56.050975Z","caller":"traceutil/trace.go:172","msg":"trace[473063703] linearizableReadLoop","detail":"{readStateIndex:317; appliedIndex:317; }","duration":"126.303554ms","start":"2025-11-19T02:35:55.924649Z","end":"2025-11-19T02:35:56.050953Z","steps":["trace[473063703] 'read index received'  (duration: 126.295886ms)","trace[473063703] 'applied index is now lower than readState.Index'  (duration: 6.82µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:56.051494Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.826094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-19T02:35:56.051549Z","caller":"traceutil/trace.go:172","msg":"trace[97861588] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:306; }","duration":"126.896539ms","start":"2025-11-19T02:35:55.924640Z","end":"2025-11-19T02:35:56.051537Z","steps":["trace[97861588] 'agreement among raft nodes before linearized reading'  (duration: 126.415633ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:56.051603Z","caller":"traceutil/trace.go:172","msg":"trace[890718969] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"167.881988ms","start":"2025-11-19T02:35:55.883708Z","end":"2025-11-19T02:35:56.051590Z","steps":["trace[890718969] 'process raft request'  (duration: 167.302661ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:56.054211Z","caller":"traceutil/trace.go:172","msg":"trace[2055791267] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"169.510416ms","start":"2025-11-19T02:35:55.884686Z","end":"2025-11-19T02:35:56.054196Z","steps":["trace[2055791267] 'process raft request'  (duration: 169.418339ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:58.577680Z","caller":"traceutil/trace.go:172","msg":"trace[2131308173] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"115.256929ms","start":"2025-11-19T02:35:58.462406Z","end":"2025-11-19T02:35:58.577663Z","steps":["trace[2131308173] 'process raft request'  (duration: 115.046009ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:58.908145Z","caller":"traceutil/trace.go:172","msg":"trace[1063129077] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"324.107613ms","start":"2025-11-19T02:35:58.584022Z","end":"2025-11-19T02:35:58.908130Z","steps":["trace[1063129077] 'process raft request'  (duration: 281.719317ms)","trace[1063129077] 'compare'  (duration: 42.307138ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:58.908401Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T02:35:58.583995Z","time spent":"324.19657ms","remote":"127.0.0.1:39370","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4960,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-881232\" mod_revision:275 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-881232\" value_size:4898 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-881232\" > >"}
	{"level":"warn","ts":"2025-11-19T02:35:59.193179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.867872ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-881232\" limit:1 ","response":"range_response_count:1 size:5559"}
	{"level":"info","ts":"2025-11-19T02:35:59.193228Z","caller":"traceutil/trace.go:172","msg":"trace[2069766678] range","detail":"{range_begin:/registry/minions/pause-881232; range_end:; response_count:1; response_revision:379; }","duration":"129.926054ms","start":"2025-11-19T02:35:59.063291Z","end":"2025-11-19T02:35:59.193217Z","steps":["trace[2069766678] 'range keys from in-memory index tree'  (duration: 129.74342ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:36:20 up  1:18,  0 user,  load average: 3.77, 1.89, 1.32
	Linux pause-881232 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [511e241319afaf190c8ae5fbaab63004ea6b45dbacde334fa1fa419fb575a64d] <==
	I1119 02:35:56.682590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:35:56.682793       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:35:56.682916       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:35:56.682936       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:35:56.682944       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:35:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:35:56.976245       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:35:56.976552       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:35:57.076167       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:35:57.076326       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:35:57.276333       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:35:57.276365       1 metrics.go:72] Registering metrics
	I1119 02:35:57.276428       1 controller.go:711] "Syncing nftables rules"
	I1119 02:36:06.885565       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:36:06.885635       1 main.go:301] handling current node
	I1119 02:36:16.891519       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:36:16.891553       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4be7b1d78923f1e87d3fe58c5f214a995d5ce37bb7387073eff5b7b26fc63630] <==
	I1119 02:35:48.257042       1 policy_source.go:240] refreshing policies
	E1119 02:35:48.281032       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 02:35:48.332419       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:35:48.341404       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:35:48.341967       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:48.348957       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:48.351194       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:35:48.449936       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:35:49.128401       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:35:49.132248       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:35:49.132264       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:35:49.594890       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:35:49.633013       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:35:49.735658       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:35:49.744817       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 02:35:49.745917       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:35:49.749819       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:35:50.178346       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:35:50.750043       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:35:50.759038       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:35:50.766867       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:35:55.383788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:55.387652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:56.060985       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:35:56.107711       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6036077de493ec8ea2df1816e33a020594bb347d174a707c3af9a3205f778d4b] <==
	I1119 02:35:55.181718       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:35:55.181771       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:35:55.181781       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:35:55.181789       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:35:55.181911       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:35:55.184325       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:35:55.184374       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:35:55.184454       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:35:55.184507       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:35:55.184553       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:35:55.184573       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:35:55.189292       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-881232" podCIDRs=["10.244.0.0/24"]
	I1119 02:35:55.194100       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:35:55.201792       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:35:55.207086       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:35:55.226230       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:35:55.226277       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:35:55.226427       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:35:55.226550       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-881232"
	I1119 02:35:55.226599       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:35:55.228594       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:35:55.228650       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:35:55.228838       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:35:55.229979       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:36:10.228335       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [320a34763e28b598bbe46ea80965cb57c40ec57cc3b3763cdac4edbbbd143b2e] <==
	I1119 02:35:56.534399       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:35:56.595494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:35:56.696636       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:35:56.696690       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 02:35:56.696777       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:35:56.715575       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:35:56.715637       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:35:56.720602       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:35:56.720982       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:35:56.721010       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:35:56.722708       1 config.go:200] "Starting service config controller"
	I1119 02:35:56.722722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:35:56.722819       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:35:56.722874       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:35:56.722904       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:35:56.722910       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:35:56.722904       1 config.go:309] "Starting node config controller"
	I1119 02:35:56.722927       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:35:56.722934       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:35:56.823607       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:35:56.823635       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:35:56.823638       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e94cef8f5297a95085b21790340c5af52657f36386ceb1facf88e2bd446b4068] <==
	E1119 02:35:48.243307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:35:48.246039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:35:48.246113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:35:48.246184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:35:48.246201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:35:48.246285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:35:48.246351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:35:48.246456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:35:48.246526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:35:48.246687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:35:48.246890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:35:48.246898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:35:48.248101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:35:48.248223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:35:49.120784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:35:49.152211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:35:49.168326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:35:49.178369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:35:49.229467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:35:49.257933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:35:49.287978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:35:49.349621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:35:49.355943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:35:49.397904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1119 02:35:51.835199       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:35:51 pause-881232 kubelet[1337]: E1119 02:35:51.621122    1337 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-881232\" already exists" pod="kube-system/kube-apiserver-pause-881232"
	Nov 19 02:35:51 pause-881232 kubelet[1337]: I1119 02:35:51.653948    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-881232" podStartSLOduration=2.653927952 podStartE2EDuration="2.653927952s" podCreationTimestamp="2025-11-19 02:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:51.642327217 +0000 UTC m=+1.160318788" watchObservedRunningTime="2025-11-19 02:35:51.653927952 +0000 UTC m=+1.171919524"
	Nov 19 02:35:51 pause-881232 kubelet[1337]: I1119 02:35:51.662266    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-881232" podStartSLOduration=1.6622519420000001 podStartE2EDuration="1.662251942s" podCreationTimestamp="2025-11-19 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:51.6545189 +0000 UTC m=+1.172510481" watchObservedRunningTime="2025-11-19 02:35:51.662251942 +0000 UTC m=+1.180243508"
	Nov 19 02:35:51 pause-881232 kubelet[1337]: I1119 02:35:51.662410    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-881232" podStartSLOduration=1.662404071 podStartE2EDuration="1.662404071s" podCreationTimestamp="2025-11-19 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:51.662313086 +0000 UTC m=+1.180304657" watchObservedRunningTime="2025-11-19 02:35:51.662404071 +0000 UTC m=+1.180395644"
	Nov 19 02:35:51 pause-881232 kubelet[1337]: I1119 02:35:51.684996    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-881232" podStartSLOduration=1.6849768630000002 podStartE2EDuration="1.684976863s" podCreationTimestamp="2025-11-19 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:51.671137573 +0000 UTC m=+1.189129145" watchObservedRunningTime="2025-11-19 02:35:51.684976863 +0000 UTC m=+1.202968435"
	Nov 19 02:35:55 pause-881232 kubelet[1337]: I1119 02:35:55.240831    1337 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:35:55 pause-881232 kubelet[1337]: I1119 02:35:55.242482    1337 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208163    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9526834f-3202-479e-b32c-aa2a78f93a7c-kube-proxy\") pod \"kube-proxy-ttd9g\" (UID: \"9526834f-3202-479e-b32c-aa2a78f93a7c\") " pod="kube-system/kube-proxy-ttd9g"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208219    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9526834f-3202-479e-b32c-aa2a78f93a7c-lib-modules\") pod \"kube-proxy-ttd9g\" (UID: \"9526834f-3202-479e-b32c-aa2a78f93a7c\") " pod="kube-system/kube-proxy-ttd9g"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208243    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/489e7b5e-4ffa-4374-851f-1bff3268465f-lib-modules\") pod \"kindnet-stg5s\" (UID: \"489e7b5e-4ffa-4374-851f-1bff3268465f\") " pod="kube-system/kindnet-stg5s"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208273    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26zw\" (UniqueName: \"kubernetes.io/projected/9526834f-3202-479e-b32c-aa2a78f93a7c-kube-api-access-l26zw\") pod \"kube-proxy-ttd9g\" (UID: \"9526834f-3202-479e-b32c-aa2a78f93a7c\") " pod="kube-system/kube-proxy-ttd9g"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208298    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/489e7b5e-4ffa-4374-851f-1bff3268465f-cni-cfg\") pod \"kindnet-stg5s\" (UID: \"489e7b5e-4ffa-4374-851f-1bff3268465f\") " pod="kube-system/kindnet-stg5s"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208317    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/489e7b5e-4ffa-4374-851f-1bff3268465f-xtables-lock\") pod \"kindnet-stg5s\" (UID: \"489e7b5e-4ffa-4374-851f-1bff3268465f\") " pod="kube-system/kindnet-stg5s"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208341    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jpfg\" (UniqueName: \"kubernetes.io/projected/489e7b5e-4ffa-4374-851f-1bff3268465f-kube-api-access-4jpfg\") pod \"kindnet-stg5s\" (UID: \"489e7b5e-4ffa-4374-851f-1bff3268465f\") " pod="kube-system/kindnet-stg5s"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208365    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9526834f-3202-479e-b32c-aa2a78f93a7c-xtables-lock\") pod \"kube-proxy-ttd9g\" (UID: \"9526834f-3202-479e-b32c-aa2a78f93a7c\") " pod="kube-system/kube-proxy-ttd9g"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.629829    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-stg5s" podStartSLOduration=0.629693796 podStartE2EDuration="629.693796ms" podCreationTimestamp="2025-11-19 02:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:56.629388416 +0000 UTC m=+6.147379985" watchObservedRunningTime="2025-11-19 02:35:56.629693796 +0000 UTC m=+6.147685367"
	Nov 19 02:35:58 pause-881232 kubelet[1337]: I1119 02:35:58.909447    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ttd9g" podStartSLOduration=2.909407265 podStartE2EDuration="2.909407265s" podCreationTimestamp="2025-11-19 02:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:56.646242861 +0000 UTC m=+6.164234433" watchObservedRunningTime="2025-11-19 02:35:58.909407265 +0000 UTC m=+8.427398837"
	Nov 19 02:36:07 pause-881232 kubelet[1337]: I1119 02:36:07.284984    1337 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:36:07 pause-881232 kubelet[1337]: I1119 02:36:07.384911    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54mjp\" (UniqueName: \"kubernetes.io/projected/f4ad03b0-bc23-416f-9fd3-ec5b3301649f-kube-api-access-54mjp\") pod \"coredns-66bc5c9577-9z4kk\" (UID: \"f4ad03b0-bc23-416f-9fd3-ec5b3301649f\") " pod="kube-system/coredns-66bc5c9577-9z4kk"
	Nov 19 02:36:07 pause-881232 kubelet[1337]: I1119 02:36:07.384952    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4ad03b0-bc23-416f-9fd3-ec5b3301649f-config-volume\") pod \"coredns-66bc5c9577-9z4kk\" (UID: \"f4ad03b0-bc23-416f-9fd3-ec5b3301649f\") " pod="kube-system/coredns-66bc5c9577-9z4kk"
	Nov 19 02:36:08 pause-881232 kubelet[1337]: I1119 02:36:08.657086    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9z4kk" podStartSLOduration=12.657060769 podStartE2EDuration="12.657060769s" podCreationTimestamp="2025-11-19 02:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:36:08.656658352 +0000 UTC m=+18.174649923" watchObservedRunningTime="2025-11-19 02:36:08.657060769 +0000 UTC m=+18.175052340"
	Nov 19 02:36:17 pause-881232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:36:17 pause-881232 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:36:17 pause-881232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:36:17 pause-881232 systemd[1]: kubelet.service: Consumed 1.151s CPU time.
	

-- /stdout --
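The dump above is the standard minikube post-mortem capture (node description, dmesg, then per-component logs). A minimal sketch of regenerating it by hand against the same profile, assuming the pause-881232 container is still up:

	out/minikube-linux-amd64 -p pause-881232 logs -n 25

Here -n 25 asks each component section for roughly its last 25 lines, which is why every section above is truncated.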
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-881232 -n pause-881232
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-881232 -n pause-881232: exit status 2 (337.696518ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
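The --format flag is a Go template rendered over minikube's status struct, so the single-field queries used by the harness can be combined into one call. A sketch, assuming the standard Host/APIServer/Kubelet fields from minikube's default status output:

	out/minikube-linux-amd64 status -p pause-881232 --format='{{.Host}} {{.APIServer}} {{.Kubelet}}'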
helpers_test.go:269: (dbg) Run:  kubectl --context pause-881232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
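The field selector above restricts the query to pods whose phase is not Running, so an empty result from this step means no pod was stuck pending or failed; the same check in human-readable form (sketch):

	kubectl --context pause-881232 get pods -A --field-selector=status.phase!=Running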
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-881232
helpers_test.go:243: (dbg) docker inspect pause-881232:

-- stdout --
	[
	    {
	        "Id": "86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70",
	        "Created": "2025-11-19T02:35:29.052359651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186515,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:35:29.104580014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70/hostname",
	        "HostsPath": "/var/lib/docker/containers/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70/hosts",
	        "LogPath": "/var/lib/docker/containers/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70/86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70-json.log",
	        "Name": "/pause-881232",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-881232:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-881232",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86f5efc2e4905e252d20151131e4f1a5433d6132a3d76139317e8d299f668d70",
	                "LowerDir": "/var/lib/docker/overlay2/881305dbc525f9bbedffba7ad65358525a65ea468b24a884cd17ad87e6c69102-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/881305dbc525f9bbedffba7ad65358525a65ea468b24a884cd17ad87e6c69102/merged",
	                "UpperDir": "/var/lib/docker/overlay2/881305dbc525f9bbedffba7ad65358525a65ea468b24a884cd17ad87e6c69102/diff",
	                "WorkDir": "/var/lib/docker/overlay2/881305dbc525f9bbedffba7ad65358525a65ea468b24a884cd17ad87e6c69102/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-881232",
	                "Source": "/var/lib/docker/volumes/pause-881232/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-881232",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-881232",
	                "name.minikube.sigs.k8s.io": "pause-881232",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "925efb81d39fd592ddf0d88b17d407f4b0c7de84ddf2effde849fe77963eaf09",
	            "SandboxKey": "/var/run/docker/netns/925efb81d39f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-881232": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c5c0c9647194999713e748225f81c7859136d53330b468e9c69564dd49c80ea",
	                    "EndpointID": "52a07fd5441349fa9a6a3573f8fc8251707a6921e47af163f8dd23455b66c708",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "82:ba:4c:0d:a4:97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-881232",
	                        "86f5efc2e490"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
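Rather than scanning the full JSON, single fields can be pulled from docker inspect with a Go template; a sketch using the State fields shown above:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' pause-881232

For this capture it would print status=running paused=false pid=186515, i.e. the container itself was never paused at the Docker level.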
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-881232 -n pause-881232
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-881232 -n pause-881232: exit status 2 (332.022991ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
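minikube status encodes component health bitwise in its exit code (per its help text: 1 for the host, 2 for the cluster, 4 for Kubernetes), so exit status 2 with Host reporting Running points at a stopped cluster component rather than a harness error. A shell sketch that captures the code explicitly (bit semantics assumed from that documented encoding):

	out/minikube-linux-amd64 status --format='{{.Host}}' -p pause-881232 -n pause-881232 || rc=$?
	echo "status exit: ${rc:-0}"   # 2 in this run: host bit clean, cluster bit set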
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-881232 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-881232 logs -n 25: (2.881746882s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ stop    │ -p scheduled-stop-693027 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │                     │
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │                     │
	│ stop    │ -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │ 19 Nov 25 02:34 UTC │
	│ delete  │ -p scheduled-stop-693027                                                                                                                                                                                                  │ scheduled-stop-693027       │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p insufficient-storage-609759 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-609759 │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │                     │
	│ delete  │ -p insufficient-storage-609759                                                                                                                                                                                            │ insufficient-storage-609759 │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p force-systemd-env-924069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-924069    │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p force-systemd-flag-103780 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-103780   │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p pause-881232 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-881232                │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:36 UTC │
	│ start   │ -p offline-crio-852644 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-852644         │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │                     │
	│ ssh     │ force-systemd-flag-103780 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-103780   │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p force-systemd-flag-103780                                                                                                                                                                                              │ force-systemd-flag-103780   │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p force-systemd-env-924069                                                                                                                                                                                               │ force-systemd-env-924069    │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p cert-expiration-455061 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-455061      │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:36 UTC │
	│ start   │ -p cert-options-336989 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-336989         │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:36 UTC │
	│ start   │ -p pause-881232 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-881232                │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ ssh     │ cert-options-336989 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-336989         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ ssh     │ -p cert-options-336989 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-336989         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ delete  │ -p cert-options-336989                                                                                                                                                                                                    │ cert-options-336989         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ pause   │ -p pause-881232 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-881232                │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │                     │
	│ start   │ -p NoKubernetes-358955 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                             │ NoKubernetes-358955         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │                     │
	│ start   │ -p NoKubernetes-358955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                     │ NoKubernetes-358955         │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:36:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:36:19.347831  202600 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:36:19.347915  202600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:36:19.347919  202600 out.go:374] Setting ErrFile to fd 2...
	I1119 02:36:19.347922  202600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:36:19.348111  202600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:36:19.348506  202600 out.go:368] Setting JSON to false
	I1119 02:36:19.349506  202600 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4726,"bootTime":1763515053,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:36:19.349592  202600 start.go:143] virtualization: kvm guest
	I1119 02:36:19.352269  202600 out.go:179] * [NoKubernetes-358955] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:36:19.353565  202600 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:36:19.353583  202600 notify.go:221] Checking for updates...
	I1119 02:36:19.355891  202600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:36:19.356928  202600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:36:19.357995  202600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:36:19.359018  202600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:36:19.360011  202600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:36:19.361407  202600 config.go:182] Loaded profile config "cert-expiration-455061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:19.361531  202600 config.go:182] Loaded profile config "offline-crio-852644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:19.361634  202600 config.go:182] Loaded profile config "pause-881232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:19.361723  202600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:36:19.384063  202600 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:36:19.384150  202600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:36:19.439757  202600 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:36:19.430072194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:36:19.439858  202600 docker.go:319] overlay module found
	I1119 02:36:19.441507  202600 out.go:179] * Using the docker driver based on user configuration
	I1119 02:36:19.442579  202600 start.go:309] selected driver: docker
	I1119 02:36:19.442593  202600 start.go:930] validating driver "docker" against <nil>
	I1119 02:36:19.442613  202600 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:36:19.443154  202600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:36:19.499374  202600 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:36:19.48921501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:36:19.499555  202600 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:36:19.499743  202600 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 02:36:19.501389  202600 out.go:179] * Using Docker driver with root privileges
	I1119 02:36:19.502453  202600 cni.go:84] Creating CNI manager for ""
	I1119 02:36:19.502532  202600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:36:19.502548  202600 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:36:19.502622  202600 start.go:353] cluster config:
	{Name:NoKubernetes-358955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-358955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:36:19.503706  202600 out.go:179] * Starting "NoKubernetes-358955" primary control-plane node in "NoKubernetes-358955" cluster
	I1119 02:36:19.504878  202600 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:36:19.506025  202600 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:36:19.507010  202600 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:36:19.507040  202600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:36:19.507056  202600 cache.go:65] Caching tarball of preloaded images
	I1119 02:36:19.507110  202600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:36:19.507135  202600 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:36:19.507150  202600 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:36:19.507252  202600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/NoKubernetes-358955/config.json ...
	I1119 02:36:19.507276  202600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/NoKubernetes-358955/config.json: {Name:mk328c666f5ac3d02e64668e411b2ecd63ab7065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:36:19.526522  202600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:36:19.526543  202600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:36:19.526559  202600 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:36:19.526582  202600 start.go:360] acquireMachinesLock for NoKubernetes-358955: {Name:mkdd15688f89c119a76a0016afd599ec23aa4449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:36:19.526659  202600 start.go:364] duration metric: took 64.106µs to acquireMachinesLock for "NoKubernetes-358955"
	I1119 02:36:19.526680  202600 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-358955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-358955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:36:19.526731  202600 start.go:125] createHost starting for "" (driver="docker")
	W1119 02:36:17.562034  184492 node_ready.go:57] node "offline-crio-852644" has "Ready":"False" status (will retry)
	W1119 02:36:19.562506  184492 node_ready.go:57] node "offline-crio-852644" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.099376209Z" level=info msg="RDT not available in the host system"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.099388707Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.100278129Z" level=info msg="Conmon does support the --sync option"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.10030337Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.100321045Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.101183355Z" level=info msg="Conmon does support the --sync option"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.101204599Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.105275148Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.105298117Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.106098454Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.106592732Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.106651501Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188326151Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-9z4kk Namespace:kube-system ID:459dd6ae14656b22173e4ca65f1b3bfbe236e25c462646b70be820878c3c99b9 UID:f4ad03b0-bc23-416f-9fd3-ec5b3301649f NetNS:/var/run/netns/52135fa1-9602-4fc3-b1d2-dab37944dab0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132980}] Aliases:map[]}"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188514056Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-9z4kk for CNI network kindnet (type=ptp)"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188935861Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188955035Z" level=info msg="Starting seccomp notifier watcher"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.188994688Z" level=info msg="Create NRI interface"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189096395Z" level=info msg="built-in NRI default validator is disabled"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189104976Z" level=info msg="runtime interface created"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189118956Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189127116Z" level=info msg="runtime interface starting up..."
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189134685Z" level=info msg="starting plugins..."
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189149524Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 19 02:36:14 pause-881232 crio[2190]: time="2025-11-19T02:36:14.189475898Z" level=info msg="No systemd watchdog enabled"
	Nov 19 02:36:14 pause-881232 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	69e0618cecb03       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   459dd6ae14656       coredns-66bc5c9577-9z4kk               kube-system
	511e241319afa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   75efa0a1bd9ca       kindnet-stg5s                          kube-system
	320a34763e28b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   26 seconds ago      Running             kube-proxy                0                   880f60d293fd3       kube-proxy-ttd9g                       kube-system
	e94cef8f5297a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   aae1249adcf31       kube-scheduler-pause-881232            kube-system
	caa25c304f372       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   1ca32618ccf89       etcd-pause-881232                      kube-system
	6036077de493e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   4b3941362fd97       kube-controller-manager-pause-881232   kube-system
	4be7b1d78923f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   8608ec5edcb07       kube-apiserver-pause-881232            kube-system
	
	
	==> coredns [69e0618cecb035be983b2a6678f6e14c1b34ab986a8689b90f7c7144471b35b9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40289 - 54371 "HINFO IN 3661780102850737674.2793788983406499339. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.862215725s
	
	
	==> describe nodes <==
	Name:               pause-881232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-881232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=pause-881232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_35_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:35:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-881232
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:36:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:36:07 +0000   Wed, 19 Nov 2025 02:35:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:36:07 +0000   Wed, 19 Nov 2025 02:35:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:36:07 +0000   Wed, 19 Nov 2025 02:35:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:36:07 +0000   Wed, 19 Nov 2025 02:36:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-881232
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                9d88f66c-0933-418e-9b3d-09d7f7d6899b
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9z4kk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-881232                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-stg5s                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-881232             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-881232    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-ttd9g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-881232             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node pause-881232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node pause-881232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node pause-881232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node pause-881232 event: Registered Node pause-881232 in Controller
	  Normal  NodeReady                16s   kubelet          Node pause-881232 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.087110] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.840612] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 01:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.036368] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023894] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +2.047754] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 01:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[  +8.383180] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[ +16.382291] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[ +32.252687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	
	
	==> etcd [caa25c304f3729a515d82f1166a519c66499e8206ee80da50fdc4a55a960dd9b] <==
	{"level":"warn","ts":"2025-11-19T02:35:47.617228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:47.675021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:35:53.179524Z","caller":"traceutil/trace.go:172","msg":"trace[1869520130] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"103.305341ms","start":"2025-11-19T02:35:53.076195Z","end":"2025-11-19T02:35:53.179500Z","steps":["trace[1869520130] 'process raft request'  (duration: 40.732793ms)","trace[1869520130] 'compare'  (duration: 62.450636ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:53.446235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.068832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T02:35:53.446326Z","caller":"traceutil/trace.go:172","msg":"trace[45827625] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:285; }","duration":"192.202032ms","start":"2025-11-19T02:35:53.254107Z","end":"2025-11-19T02:35:53.446309Z","steps":["trace[45827625] 'agreement among raft nodes before linearized reading'  (duration: 64.13585ms)","trace[45827625] 'range keys from in-memory index tree'  (duration: 127.904845ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:53.446647Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.94266ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356742168864193 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" value_size:130 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T02:35:53.446718Z","caller":"traceutil/trace.go:172","msg":"trace[1735801983] transaction","detail":"{read_only:false; response_revision:286; number_of_response:1; }","duration":"221.533152ms","start":"2025-11-19T02:35:53.225172Z","end":"2025-11-19T02:35:53.446706Z","steps":["trace[1735801983] 'process raft request'  (duration: 93.107878ms)","trace[1735801983] 'compare'  (duration: 127.859331ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:35:54.299526Z","caller":"traceutil/trace.go:172","msg":"trace[1800090019] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"123.455202ms","start":"2025-11-19T02:35:54.176055Z","end":"2025-11-19T02:35:54.299510Z","steps":["trace[1800090019] 'process raft request'  (duration: 123.332002ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:35:54.560359Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.440072ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356742168864219 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" value_size:124 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T02:35:54.560608Z","caller":"traceutil/trace.go:172","msg":"trace[365308142] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"234.846672ms","start":"2025-11-19T02:35:54.325735Z","end":"2025-11-19T02:35:54.560582Z","steps":["trace[365308142] 'process raft request'  (duration: 105.135661ms)","trace[365308142] 'compare'  (duration: 129.326509ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:54.859696Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.462497ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356742168864222 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" value_size:138 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T02:35:54.859781Z","caller":"traceutil/trace.go:172","msg":"trace[226184537] linearizableReadLoop","detail":"{readStateIndex:304; appliedIndex:303; }","duration":"112.507671ms","start":"2025-11-19T02:35:54.747262Z","end":"2025-11-19T02:35:54.859770Z","steps":["trace[226184537] 'read index received'  (duration: 26.403µs)","trace[226184537] 'applied index is now lower than readState.Index'  (duration: 112.480398ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:54.859850Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.583916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T02:35:54.859872Z","caller":"traceutil/trace.go:172","msg":"trace[275481823] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:293; }","duration":"112.612001ms","start":"2025-11-19T02:35:54.747254Z","end":"2025-11-19T02:35:54.859866Z","steps":["trace[275481823] 'agreement among raft nodes before linearized reading'  (duration: 112.557056ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:54.859863Z","caller":"traceutil/trace.go:172","msg":"trace[896682309] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"292.221576ms","start":"2025-11-19T02:35:54.567579Z","end":"2025-11-19T02:35:54.859801Z","steps":["trace[896682309] 'process raft request'  (duration: 94.603106ms)","trace[896682309] 'compare'  (duration: 197.358639ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:35:56.050975Z","caller":"traceutil/trace.go:172","msg":"trace[473063703] linearizableReadLoop","detail":"{readStateIndex:317; appliedIndex:317; }","duration":"126.303554ms","start":"2025-11-19T02:35:55.924649Z","end":"2025-11-19T02:35:56.050953Z","steps":["trace[473063703] 'read index received'  (duration: 126.295886ms)","trace[473063703] 'applied index is now lower than readState.Index'  (duration: 6.82µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:56.051494Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.826094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-19T02:35:56.051549Z","caller":"traceutil/trace.go:172","msg":"trace[97861588] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:306; }","duration":"126.896539ms","start":"2025-11-19T02:35:55.924640Z","end":"2025-11-19T02:35:56.051537Z","steps":["trace[97861588] 'agreement among raft nodes before linearized reading'  (duration: 126.415633ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:56.051603Z","caller":"traceutil/trace.go:172","msg":"trace[890718969] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"167.881988ms","start":"2025-11-19T02:35:55.883708Z","end":"2025-11-19T02:35:56.051590Z","steps":["trace[890718969] 'process raft request'  (duration: 167.302661ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:56.054211Z","caller":"traceutil/trace.go:172","msg":"trace[2055791267] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"169.510416ms","start":"2025-11-19T02:35:55.884686Z","end":"2025-11-19T02:35:56.054196Z","steps":["trace[2055791267] 'process raft request'  (duration: 169.418339ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:58.577680Z","caller":"traceutil/trace.go:172","msg":"trace[2131308173] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"115.256929ms","start":"2025-11-19T02:35:58.462406Z","end":"2025-11-19T02:35:58.577663Z","steps":["trace[2131308173] 'process raft request'  (duration: 115.046009ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:35:58.908145Z","caller":"traceutil/trace.go:172","msg":"trace[1063129077] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"324.107613ms","start":"2025-11-19T02:35:58.584022Z","end":"2025-11-19T02:35:58.908130Z","steps":["trace[1063129077] 'process raft request'  (duration: 281.719317ms)","trace[1063129077] 'compare'  (duration: 42.307138ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:35:58.908401Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T02:35:58.583995Z","time spent":"324.19657ms","remote":"127.0.0.1:39370","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4960,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-881232\" mod_revision:275 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-881232\" value_size:4898 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-881232\" > >"}
	{"level":"warn","ts":"2025-11-19T02:35:59.193179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.867872ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-881232\" limit:1 ","response":"range_response_count:1 size:5559"}
	{"level":"info","ts":"2025-11-19T02:35:59.193228Z","caller":"traceutil/trace.go:172","msg":"trace[2069766678] range","detail":"{range_begin:/registry/minions/pause-881232; range_end:; response_count:1; response_revision:379; }","duration":"129.926054ms","start":"2025-11-19T02:35:59.063291Z","end":"2025-11-19T02:35:59.193217Z","steps":["trace[2069766678] 'range keys from in-memory index tree'  (duration: 129.74342ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:36:24 up  1:18,  0 user,  load average: 3.77, 1.89, 1.32
	Linux pause-881232 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [511e241319afaf190c8ae5fbaab63004ea6b45dbacde334fa1fa419fb575a64d] <==
	I1119 02:35:56.682590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:35:56.682793       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:35:56.682916       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:35:56.682936       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:35:56.682944       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:35:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:35:56.976245       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:35:56.976552       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:35:57.076167       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:35:57.076326       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:35:57.276333       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:35:57.276365       1 metrics.go:72] Registering metrics
	I1119 02:35:57.276428       1 controller.go:711] "Syncing nftables rules"
	I1119 02:36:06.885565       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:36:06.885635       1 main.go:301] handling current node
	I1119 02:36:16.891519       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:36:16.891553       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4be7b1d78923f1e87d3fe58c5f214a995d5ce37bb7387073eff5b7b26fc63630] <==
	I1119 02:35:48.257042       1 policy_source.go:240] refreshing policies
	E1119 02:35:48.281032       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 02:35:48.332419       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:35:48.341404       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:35:48.341967       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:48.348957       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:48.351194       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:35:48.449936       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:35:49.128401       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:35:49.132248       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:35:49.132264       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:35:49.594890       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:35:49.633013       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:35:49.735658       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:35:49.744817       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 02:35:49.745917       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:35:49.749819       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:35:50.178346       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:35:50.750043       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:35:50.759038       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:35:50.766867       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:35:55.383788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:55.387652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:56.060985       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:35:56.107711       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6036077de493ec8ea2df1816e33a020594bb347d174a707c3af9a3205f778d4b] <==
	I1119 02:35:55.181718       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:35:55.181771       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:35:55.181781       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:35:55.181789       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:35:55.181911       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:35:55.184325       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:35:55.184374       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:35:55.184454       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:35:55.184507       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:35:55.184553       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:35:55.184573       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:35:55.189292       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-881232" podCIDRs=["10.244.0.0/24"]
	I1119 02:35:55.194100       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:35:55.201792       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:35:55.207086       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:35:55.226230       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:35:55.226277       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:35:55.226427       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:35:55.226550       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-881232"
	I1119 02:35:55.226599       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:35:55.228594       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:35:55.228650       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:35:55.228838       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:35:55.229979       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:36:10.228335       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [320a34763e28b598bbe46ea80965cb57c40ec57cc3b3763cdac4edbbbd143b2e] <==
	I1119 02:35:56.534399       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:35:56.595494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:35:56.696636       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:35:56.696690       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 02:35:56.696777       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:35:56.715575       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:35:56.715637       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:35:56.720602       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:35:56.720982       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:35:56.721010       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:35:56.722708       1 config.go:200] "Starting service config controller"
	I1119 02:35:56.722722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:35:56.722819       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:35:56.722874       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:35:56.722904       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:35:56.722910       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:35:56.722904       1 config.go:309] "Starting node config controller"
	I1119 02:35:56.722927       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:35:56.722934       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:35:56.823607       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:35:56.823635       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:35:56.823638       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e94cef8f5297a95085b21790340c5af52657f36386ceb1facf88e2bd446b4068] <==
	E1119 02:35:48.243307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:35:48.246039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:35:48.246113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:35:48.246184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:35:48.246201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:35:48.246285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:35:48.246351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:35:48.246456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:35:48.246526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:35:48.246687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:35:48.246890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:35:48.246898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:35:48.248101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:35:48.248223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:35:49.120784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:35:49.152211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:35:49.168326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:35:49.178369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:35:49.229467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:35:49.257933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:35:49.287978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:35:49.349621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:35:49.355943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:35:49.397904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1119 02:35:51.835199       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:35:51 pause-881232 kubelet[1337]: E1119 02:35:51.621122    1337 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-881232\" already exists" pod="kube-system/kube-apiserver-pause-881232"
	Nov 19 02:35:51 pause-881232 kubelet[1337]: I1119 02:35:51.653948    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-881232" podStartSLOduration=2.653927952 podStartE2EDuration="2.653927952s" podCreationTimestamp="2025-11-19 02:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:51.642327217 +0000 UTC m=+1.160318788" watchObservedRunningTime="2025-11-19 02:35:51.653927952 +0000 UTC m=+1.171919524"
	Nov 19 02:35:51 pause-881232 kubelet[1337]: I1119 02:35:51.662266    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-881232" podStartSLOduration=1.6622519420000001 podStartE2EDuration="1.662251942s" podCreationTimestamp="2025-11-19 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:51.6545189 +0000 UTC m=+1.172510481" watchObservedRunningTime="2025-11-19 02:35:51.662251942 +0000 UTC m=+1.180243508"
	Nov 19 02:35:51 pause-881232 kubelet[1337]: I1119 02:35:51.662410    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-881232" podStartSLOduration=1.662404071 podStartE2EDuration="1.662404071s" podCreationTimestamp="2025-11-19 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:51.662313086 +0000 UTC m=+1.180304657" watchObservedRunningTime="2025-11-19 02:35:51.662404071 +0000 UTC m=+1.180395644"
	Nov 19 02:35:51 pause-881232 kubelet[1337]: I1119 02:35:51.684996    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-881232" podStartSLOduration=1.6849768630000002 podStartE2EDuration="1.684976863s" podCreationTimestamp="2025-11-19 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:51.671137573 +0000 UTC m=+1.189129145" watchObservedRunningTime="2025-11-19 02:35:51.684976863 +0000 UTC m=+1.202968435"
	Nov 19 02:35:55 pause-881232 kubelet[1337]: I1119 02:35:55.240831    1337 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:35:55 pause-881232 kubelet[1337]: I1119 02:35:55.242482    1337 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208163    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9526834f-3202-479e-b32c-aa2a78f93a7c-kube-proxy\") pod \"kube-proxy-ttd9g\" (UID: \"9526834f-3202-479e-b32c-aa2a78f93a7c\") " pod="kube-system/kube-proxy-ttd9g"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208219    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9526834f-3202-479e-b32c-aa2a78f93a7c-lib-modules\") pod \"kube-proxy-ttd9g\" (UID: \"9526834f-3202-479e-b32c-aa2a78f93a7c\") " pod="kube-system/kube-proxy-ttd9g"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208243    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/489e7b5e-4ffa-4374-851f-1bff3268465f-lib-modules\") pod \"kindnet-stg5s\" (UID: \"489e7b5e-4ffa-4374-851f-1bff3268465f\") " pod="kube-system/kindnet-stg5s"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208273    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26zw\" (UniqueName: \"kubernetes.io/projected/9526834f-3202-479e-b32c-aa2a78f93a7c-kube-api-access-l26zw\") pod \"kube-proxy-ttd9g\" (UID: \"9526834f-3202-479e-b32c-aa2a78f93a7c\") " pod="kube-system/kube-proxy-ttd9g"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208298    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/489e7b5e-4ffa-4374-851f-1bff3268465f-cni-cfg\") pod \"kindnet-stg5s\" (UID: \"489e7b5e-4ffa-4374-851f-1bff3268465f\") " pod="kube-system/kindnet-stg5s"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208317    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/489e7b5e-4ffa-4374-851f-1bff3268465f-xtables-lock\") pod \"kindnet-stg5s\" (UID: \"489e7b5e-4ffa-4374-851f-1bff3268465f\") " pod="kube-system/kindnet-stg5s"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208341    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jpfg\" (UniqueName: \"kubernetes.io/projected/489e7b5e-4ffa-4374-851f-1bff3268465f-kube-api-access-4jpfg\") pod \"kindnet-stg5s\" (UID: \"489e7b5e-4ffa-4374-851f-1bff3268465f\") " pod="kube-system/kindnet-stg5s"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.208365    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9526834f-3202-479e-b32c-aa2a78f93a7c-xtables-lock\") pod \"kube-proxy-ttd9g\" (UID: \"9526834f-3202-479e-b32c-aa2a78f93a7c\") " pod="kube-system/kube-proxy-ttd9g"
	Nov 19 02:35:56 pause-881232 kubelet[1337]: I1119 02:35:56.629829    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-stg5s" podStartSLOduration=0.629693796 podStartE2EDuration="629.693796ms" podCreationTimestamp="2025-11-19 02:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:56.629388416 +0000 UTC m=+6.147379985" watchObservedRunningTime="2025-11-19 02:35:56.629693796 +0000 UTC m=+6.147685367"
	Nov 19 02:35:58 pause-881232 kubelet[1337]: I1119 02:35:58.909447    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ttd9g" podStartSLOduration=2.909407265 podStartE2EDuration="2.909407265s" podCreationTimestamp="2025-11-19 02:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:56.646242861 +0000 UTC m=+6.164234433" watchObservedRunningTime="2025-11-19 02:35:58.909407265 +0000 UTC m=+8.427398837"
	Nov 19 02:36:07 pause-881232 kubelet[1337]: I1119 02:36:07.284984    1337 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:36:07 pause-881232 kubelet[1337]: I1119 02:36:07.384911    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54mjp\" (UniqueName: \"kubernetes.io/projected/f4ad03b0-bc23-416f-9fd3-ec5b3301649f-kube-api-access-54mjp\") pod \"coredns-66bc5c9577-9z4kk\" (UID: \"f4ad03b0-bc23-416f-9fd3-ec5b3301649f\") " pod="kube-system/coredns-66bc5c9577-9z4kk"
	Nov 19 02:36:07 pause-881232 kubelet[1337]: I1119 02:36:07.384952    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4ad03b0-bc23-416f-9fd3-ec5b3301649f-config-volume\") pod \"coredns-66bc5c9577-9z4kk\" (UID: \"f4ad03b0-bc23-416f-9fd3-ec5b3301649f\") " pod="kube-system/coredns-66bc5c9577-9z4kk"
	Nov 19 02:36:08 pause-881232 kubelet[1337]: I1119 02:36:08.657086    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9z4kk" podStartSLOduration=12.657060769 podStartE2EDuration="12.657060769s" podCreationTimestamp="2025-11-19 02:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:36:08.656658352 +0000 UTC m=+18.174649923" watchObservedRunningTime="2025-11-19 02:36:08.657060769 +0000 UTC m=+18.175052340"
	Nov 19 02:36:17 pause-881232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:36:17 pause-881232 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:36:17 pause-881232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:36:17 pause-881232 systemd[1]: kubelet.service: Consumed 1.151s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-881232 -n pause-881232
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-881232 -n pause-881232: exit status 2 (330.86451ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-881232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.85s)
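Note: the kubelet log above ends with systemd stopping kubelet.service, which is consistent with a pause being attempted; the post-mortem status check nevertheless reported the API server as Running. A minimal sketch for re-running the check by hand, assuming the pause-881232 profile from this run still exists (the "Paused" expectation is an assumption about a successful pause, not output captured here):

	out/minikube-linux-amd64 pause -p pause-881232 --alsologtostderr -v=1
	# after a successful pause, this is expected to print "Paused" rather than "Running"
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-881232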

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.041153ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:43:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
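Note: MK_ADDON_ENABLE_PAUSED indicates the enable aborted in minikube's paused-state check, and the stderr shows the underlying command: "sudo runc list -f json" failed because /run/runc does not exist in the node. A hedged sketch for poking at this by hand while the node container is still up (the container name is taken from the docker inspect output below; these commands are illustrative, not part of the test):

	docker exec old-k8s-version-987573 sudo runc list -f json   # reproduces the failing check
	docker exec old-k8s-version-987573 ls -ld /run/runc         # shows whether the runc state directory exists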
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-987573 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-987573 describe deploy/metrics-server -n kube-system: exit status 1 (57.277427ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-987573 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
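Note: this assertion reads the image back out of "kubectl describe deploy/metrics-server"; because the enable step above failed, the deployment was never created and the deployment info is empty. Once the addon enables successfully, the --images/--registries override could be verified with a one-liner like this (the jsonpath expression is illustrative):

	kubectl --context old-k8s-version-987573 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected: fake.domain/registry.k8s.io/echoserver:1.4, matching the flags passed above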
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-987573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-987573:

-- stdout --
	[
	    {
	        "Id": "ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71",
	        "Created": "2025-11-19T02:42:22.008498904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293683,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:42:22.044031966Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/hosts",
	        "LogPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71-json.log",
	        "Name": "/old-k8s-version-987573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-987573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-987573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71",
	                "LowerDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-987573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-987573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-987573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-987573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-987573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "528365a85980d29131f24eaf69b9e443b15a164561dc91f5c4a3201f97a5e7bc",
	            "SandboxKey": "/var/run/docker/netns/528365a85980",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-987573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d7fb52c0aef23ee545f7da5c971e8a8676f2221b08ae8d87614b0f88b577986",
	                    "EndpointID": "d6f4d4948fccb6afa6c2ff6ae26614f1f56037851618f87ee0ca596641d25e3e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "aa:cc:c6:42:76:50",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-987573",
	                        "ae750ceb959b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-987573 -n old-k8s-version-987573
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-987573 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-284802                                                                                                                                             │ kubernetes-upgrade-284802    │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo docker system info                                                                                                                                 │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cri-dockerd --version                                                                                                                              │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo containerd config dump                                                                                                                             │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo crio config                                                                                                                                        │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p bridge-001617                                                                                                                                                         │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p disable-driver-mounts-682232                                                                                                                                          │ disable-driver-mounts-682232 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:42:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:42:42.176241  306860 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:42:42.176542  306860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:42:42.176552  306860 out.go:374] Setting ErrFile to fd 2...
	I1119 02:42:42.176557  306860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:42:42.176798  306860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:42:42.177312  306860 out.go:368] Setting JSON to false
	I1119 02:42:42.178694  306860 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5109,"bootTime":1763515053,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:42:42.178817  306860 start.go:143] virtualization: kvm guest
	I1119 02:42:42.181266  306860 out.go:179] * [default-k8s-diff-port-167150] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:42:42.182506  306860 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:42:42.182508  306860 notify.go:221] Checking for updates...
	I1119 02:42:42.184984  306860 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:42:42.186380  306860 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:42:42.187520  306860 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:42:42.188641  306860 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:42:42.189749  306860 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:42:42.191476  306860 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:42.191626  306860 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:42.191747  306860 config.go:182] Loaded profile config "old-k8s-version-987573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:42:42.191879  306860 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:42:42.219938  306860 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:42:42.220096  306860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:42:42.291707  306860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-19 02:42:42.280719148 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:42:42.291851  306860 docker.go:319] overlay module found
	I1119 02:42:42.294039  306860 out.go:179] * Using the docker driver based on user configuration
	I1119 02:42:42.295025  306860 start.go:309] selected driver: docker
	I1119 02:42:42.295045  306860 start.go:930] validating driver "docker" against <nil>
	I1119 02:42:42.295071  306860 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:42:42.295643  306860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:42:42.358641  306860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-19 02:42:42.347786548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
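
The two docker info dumps above are minikube probing the host with `docker system info --format "{{json .}}"` (cli_runner.go:164) before committing to the driver. A minimal sketch of running the same probe by hand and pulling out the fields the validation cares about; the jq filter is an assumption for illustration, not something minikube runs:

    # Same probe as info.go:266, trimmed to a few fields with jq (illustrative)
    docker system info --format '{{json .}}' \
      | jq '{NCPU, MemTotal, CgroupDriver, ServerVersion}'
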
	I1119 02:42:42.358876  306860 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:42:42.359101  306860 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:42:42.361283  306860 out.go:179] * Using Docker driver with root privileges
	I1119 02:42:42.362628  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:42:42.362714  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:42.362728  306860 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:42:42.362817  306860 start.go:353] cluster config:
	{Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:42.364219  306860 out.go:179] * Starting "default-k8s-diff-port-167150" primary control-plane node in "default-k8s-diff-port-167150" cluster
	I1119 02:42:42.367198  306860 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:42:42.368425  306860 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:42:42.369910  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:42.369948  306860 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:42:42.369957  306860 cache.go:65] Caching tarball of preloaded images
	I1119 02:42:42.369996  306860 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:42:42.370067  306860 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:42:42.370082  306860 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:42:42.370209  306860 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json ...
	I1119 02:42:42.370241  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json: {Name:mkcddbcc964a690b001741c541d540f001994a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
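
The profile config written above is plain JSON, so the cluster config dumped at start.go:353 can be re-read from disk. A hedged sketch; the key names are assumed to mirror the Go struct fields visible in the dump:

    # Inspect the persisted profile config (path taken from the log line above)
    jq '.Name, .KubernetesConfig.KubernetesVersion, .Nodes[0].Port' \
      /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json
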
	I1119 02:42:42.393924  306860 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:42:42.393944  306860 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:42:42.393962  306860 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:42:42.393994  306860 start.go:360] acquireMachinesLock for default-k8s-diff-port-167150: {Name:mk2e469e9e78dab6a8d53f30fec89bc1e449a209 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:42:42.394102  306860 start.go:364] duration metric: took 89.942µs to acquireMachinesLock for "default-k8s-diff-port-167150"
	I1119 02:42:42.394130  306860 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:42:42.394220  306860 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:42:39.183788  302848 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-811173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.239456846s)
	I1119 02:42:39.183822  302848 kic.go:203] duration metric: took 4.239611554s to extract preloaded images to volume ...
	W1119 02:42:39.183909  302848 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:42:39.183954  302848 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:42:39.184001  302848 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:42:39.255629  302848 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-811173 --name embed-certs-811173 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-811173 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-811173 --network embed-certs-811173 --ip 192.168.85.2 --volume embed-certs-811173:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:42:39.648577  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Running}}
	I1119 02:42:39.668032  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:39.687214  302848 cli_runner.go:164] Run: docker exec embed-certs-811173 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:42:39.745898  302848 oci.go:144] the created container "embed-certs-811173" has a running status.
	I1119 02:42:39.745933  302848 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa...
	I1119 02:42:40.188034  302848 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:42:40.217982  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:40.237916  302848 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:42:40.237940  302848 kic_runner.go:114] Args: [docker exec --privileged embed-certs-811173 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:42:40.289247  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:40.309791  302848 machine.go:94] provisionDockerMachine start ...
	I1119 02:42:40.309919  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:40.329857  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:40.330085  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:40.330094  302848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:42:40.330814  302848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40226->127.0.0.1:33098: read: connection reset by peer
	I1119 02:42:43.466968  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-811173
	
	I1119 02:42:43.466997  302848 ubuntu.go:182] provisioning hostname "embed-certs-811173"
	I1119 02:42:43.467046  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:43.487761  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:43.488030  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:43.488051  302848 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-811173 && echo "embed-certs-811173" | sudo tee /etc/hostname
	I1119 02:42:43.643097  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-811173
	
	I1119 02:42:43.643198  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:43.663378  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:43.663636  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:43.663655  302848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-811173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-811173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-811173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:42:43.798171  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:42:43.798205  302848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:42:43.798228  302848 ubuntu.go:190] setting up certificates
	I1119 02:42:43.798241  302848 provision.go:84] configureAuth start
	I1119 02:42:43.798305  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:43.819034  302848 provision.go:143] copyHostCerts
	I1119 02:42:43.819102  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:42:43.819115  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:42:43.819176  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:42:43.819262  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:42:43.819270  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:42:43.819297  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:42:43.819360  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:42:43.819368  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:42:43.819392  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:42:43.819475  302848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.embed-certs-811173 san=[127.0.0.1 192.168.85.2 embed-certs-811173 localhost minikube]
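
minikube generates that server certificate in Go, signed by the shared CA with the org and SAN list in the log line above. An equivalent openssl sketch, purely illustrative and not minikube's actual code path:

    # Hypothetical reproduction: CA-signed server cert with the logged SANs
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.embed-certs-811173" -out server.csr
    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
      -CAcreateserial -days 1095 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:embed-certs-811173,DNS:localhost,DNS:minikube')
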
	I1119 02:42:44.009209  302848 provision.go:177] copyRemoteCerts
	I1119 02:42:44.009280  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:42:44.009327  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.029510  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
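
Every remote step in this flow rides over SSH to the kic container's published 22/tcp port (33098 here) as user docker, using the machine key generated at 02:42:39.745933. Opening the same session by hand, reusing the exact inspect template from the log:

    # Resolve the host port mapped to 22/tcp, then SSH in with the machine key
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-811173)
    ssh -o StrictHostKeyChecking=no -p "$PORT" \
      -i /home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa \
      docker@127.0.0.1 hostname   # expected output: embed-certs-811173
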
	I1119 02:42:40.627209  299668 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.471693628s)
	I1119 02:42:40.627247  299668 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 02:42:40.627277  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1119 02:42:40.627374  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.472009201s)
	I1119 02:42:40.627402  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 02:42:40.627449  299668 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 02:42:40.627495  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1119 02:42:42.166462  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.538920814s)
	I1119 02:42:42.166489  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 02:42:42.166520  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 02:42:42.166567  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 02:42:43.179025  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.012437665s)
	I1119 02:42:43.179053  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 02:42:43.179080  299668 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 02:42:43.179117  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
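
Process 299668 has no preload tarball to extract, so it transfers each cached image archive to the node over scp and loads it with podman, whose containers/storage backend is shared with CRI-O. The same load-and-verify cycle by hand, with commands taken from the log:

    # Load one cached archive into the shared image store, then confirm CRI-O sees it
    sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
    sudo crictl images | grep etcd
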
	I1119 02:42:41.454319  291163 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:42:41.462416  291163 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 02:42:41.462446  291163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:42:41.496324  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:42:42.356676  291163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:42:42.356833  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-987573 minikube.k8s.io/updated_at=2025_11_19T02_42_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=old-k8s-version-987573 minikube.k8s.io/primary=true
	I1119 02:42:42.356833  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:42.367139  291163 ops.go:34] apiserver oom_adj: -16
	I1119 02:42:42.457034  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:42.957751  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:43.457688  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:43.957153  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:44.457654  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:44.957568  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:45.457760  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
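
The repeated `kubectl get sa default` runs from process 291163 are a readiness gate: a kubeadm cluster only gets the default ServiceAccount once the controller manager's service-account controller is up, so minikube retries every 500ms (visible in the timestamps) until the command succeeds. The same wait expressed as a shell loop:

    # Block until the default ServiceAccount exists (controller-manager is serving)
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
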
	I1119 02:42:42.395695  306860 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:42:42.395917  306860 start.go:159] libmachine.API.Create for "default-k8s-diff-port-167150" (driver="docker")
	I1119 02:42:42.395950  306860 client.go:173] LocalClient.Create starting
	I1119 02:42:42.396027  306860 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 02:42:42.396063  306860 main.go:143] libmachine: Decoding PEM data...
	I1119 02:42:42.396092  306860 main.go:143] libmachine: Parsing certificate...
	I1119 02:42:42.396166  306860 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 02:42:42.396197  306860 main.go:143] libmachine: Decoding PEM data...
	I1119 02:42:42.396215  306860 main.go:143] libmachine: Parsing certificate...
	I1119 02:42:42.396556  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:42:42.414929  306860 cli_runner.go:211] docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:42:42.415012  306860 network_create.go:284] running [docker network inspect default-k8s-diff-port-167150] to gather additional debugging logs...
	I1119 02:42:42.415033  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150
	W1119 02:42:42.434734  306860 cli_runner.go:211] docker network inspect default-k8s-diff-port-167150 returned with exit code 1
	I1119 02:42:42.434765  306860 network_create.go:287] error running [docker network inspect default-k8s-diff-port-167150]: docker network inspect default-k8s-diff-port-167150: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-167150 not found
	I1119 02:42:42.434797  306860 network_create.go:289] output of [docker network inspect default-k8s-diff-port-167150]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-167150 not found
	
	** /stderr **
	I1119 02:42:42.434886  306860 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:42.454554  306860 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
	I1119 02:42:42.455185  306860 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-70e7d73f86d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:64:3f:46:8e:7a} reservation:<nil>}
	I1119 02:42:42.455956  306860 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ef477b5a23 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:eb:22:b3:62:92} reservation:<nil>}
	I1119 02:42:42.456451  306860 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4d7fb52c0aef IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:ad:9c:9a:f3:90} reservation:<nil>}
	I1119 02:42:42.457310  306860 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3129c4b60559 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:04:d6:88:46:9c} reservation:<nil>}
	I1119 02:42:42.458231  306860 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f14070}
	I1119 02:42:42.458263  306860 network_create.go:124] attempt to create docker network default-k8s-diff-port-167150 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1119 02:42:42.458321  306860 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 default-k8s-diff-port-167150
	I1119 02:42:42.508901  306860 network_create.go:108] docker network default-k8s-diff-port-167150 192.168.94.0/24 created
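
Before this create, minikube walked the 192.168.x.0/24 ladder (49, 58, 67, 76, 85) and took the first free subnet. Two quick checks against the result; the Go templates are standard docker inspect usage, assumed here for illustration:

    # Confirm the subnet/gateway minikube picked for the new network
    docker network inspect default-k8s-diff-port-167150 \
      -f '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # List the subnets existing docker networks already claim
    docker network ls -q | xargs -n1 docker network inspect \
      -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
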
	I1119 02:42:42.508935  306860 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-167150" container
	I1119 02:42:42.509018  306860 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:42:42.530727  306860 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-167150 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:42:42.549909  306860 oci.go:103] Successfully created a docker volume default-k8s-diff-port-167150
	I1119 02:42:42.549999  306860 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-167150-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --entrypoint /usr/bin/test -v default-k8s-diff-port-167150:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:42:43.411678  306860 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-167150
	I1119 02:42:43.411748  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:43.411762  306860 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:42:43.411813  306860 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-167150:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
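
The sidecar pattern above, a throwaway container with the volume mounted on /var and `--entrypoint /usr/bin/tar`, seeds the future node's /var before it ever boots. A spot-check of the extracted contents using the same image and volume; the ls target directory is an assumption:

    # Peek into the freshly populated volume via another throwaway container
    docker run --rm --entrypoint /usr/bin/ls \
      -v default-k8s-diff-port-167150:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a \
      /var/lib/containers
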
	I1119 02:42:44.129173  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:42:44.149365  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:42:44.166610  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:42:44.183427  302848 provision.go:87] duration metric: took 385.168944ms to configureAuth
	I1119 02:42:44.183464  302848 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:42:44.183643  302848 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:44.183766  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.202233  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:44.202417  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:44.202444  302848 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:42:44.503275  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:42:44.503306  302848 machine.go:97] duration metric: took 4.193483812s to provisionDockerMachine
	I1119 02:42:44.503317  302848 client.go:176] duration metric: took 10.179262279s to LocalClient.Create
	I1119 02:42:44.503337  302848 start.go:167] duration metric: took 10.179334886s to libmachine.API.Create "embed-certs-811173"
	I1119 02:42:44.503346  302848 start.go:293] postStartSetup for "embed-certs-811173" (driver="docker")
	I1119 02:42:44.503358  302848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:42:44.503415  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:42:44.503480  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.526986  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.639041  302848 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:42:44.644425  302848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:42:44.644489  302848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:42:44.644502  302848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:42:44.644562  302848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:42:44.644662  302848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:42:44.644802  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:42:44.657698  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:44.684627  302848 start.go:296] duration metric: took 181.267139ms for postStartSetup
	I1119 02:42:44.685672  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:44.709637  302848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/config.json ...
	I1119 02:42:44.709970  302848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:42:44.710086  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.735883  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.842589  302848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:42:44.848370  302848 start.go:128] duration metric: took 10.52622031s to createHost
	I1119 02:42:44.848397  302848 start.go:83] releasing machines lock for "embed-certs-811173", held for 10.526348738s
	I1119 02:42:44.848480  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:44.873209  302848 ssh_runner.go:195] Run: cat /version.json
	I1119 02:42:44.873265  302848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:42:44.873267  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.873325  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.895290  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.896255  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:45.089046  302848 ssh_runner.go:195] Run: systemctl --version
	I1119 02:42:45.096166  302848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:42:45.135030  302848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:42:45.140127  302848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:42:45.140199  302848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:42:45.170487  302848 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:42:45.170513  302848 start.go:496] detecting cgroup driver to use...
	I1119 02:42:45.170545  302848 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:42:45.170595  302848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:42:45.188031  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:42:45.201633  302848 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:42:45.201682  302848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:42:45.219175  302848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:42:45.238631  302848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:42:45.357829  302848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:42:45.467480  302848 docker.go:234] disabling docker service ...
	I1119 02:42:45.467546  302848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:42:45.493546  302848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:42:45.508908  302848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:42:45.630796  302848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:42:45.744606  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:42:45.758583  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:42:45.802834  302848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:42:45.802888  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.815732  302848 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:42:45.815833  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.825707  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.847178  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.877522  302848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:42:45.886218  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.939829  302848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:46.000872  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:46.058642  302848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:42:46.066800  302848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:42:46.074598  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:46.154622  302848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:42:49.212232  302848 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.057563682s)
	I1119 02:42:49.212266  302848 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:42:49.212309  302848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:42:49.217067  302848 start.go:564] Will wait 60s for crictl version
	I1119 02:42:49.217124  302848 ssh_runner.go:195] Run: which crictl
	I1119 02:42:49.221132  302848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:42:49.251469  302848 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:42:49.251561  302848 ssh_runner.go:195] Run: crio --version
	I1119 02:42:49.280463  302848 ssh_runner.go:195] Run: crio --version
	I1119 02:42:49.310498  302848 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:42:48.297963  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.118818905s)
	I1119 02:42:48.297993  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 02:42:48.298019  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 02:42:48.298066  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 02:42:49.881405  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.583300882s)
	I1119 02:42:49.881450  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 02:42:49.881479  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 02:42:49.881558  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 02:42:45.957339  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:46.457346  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:46.957840  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:47.457460  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:47.957489  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:48.457490  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:48.957548  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.457120  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.957332  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:50.457258  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.311873  302848 cli_runner.go:164] Run: docker network inspect embed-certs-811173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:49.337627  302848 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:42:49.343117  302848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:49.363673  302848 kubeadm.go:884] updating cluster {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:42:49.363803  302848 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:49.363881  302848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:49.402301  302848 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:49.402327  302848 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:42:49.402381  302848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:49.432172  302848 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:49.432198  302848 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:42:49.432208  302848 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 02:42:49.432312  302848 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-811173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
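
This drop-in lands on the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 368-byte scp at 02:42:49.512773). The empty `ExecStart=` line is deliberate systemd usage: it clears the packaged command before the unit sets minikube's own. Checking the merged unit on the node:

    # Show the kubelet unit plus drop-ins, then the effective command line
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart
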
	I1119 02:42:49.432394  302848 ssh_runner.go:195] Run: crio config
	I1119 02:42:49.490697  302848 cni.go:84] Creating CNI manager for ""
	I1119 02:42:49.490766  302848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:49.490806  302848 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:49.490847  302848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-811173 NodeName:embed-certs-811173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:49.491024  302848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-811173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:42:49.491099  302848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:49.501687  302848 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:42:49.501746  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:49.512773  302848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:42:49.533263  302848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:49.552949  302848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
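
kubeadm.yaml.new carries the multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, plus the KubeProxyConfiguration) and is what kubeadm init later consumes. A hedged pre-flight sketch using the bundled binary; `kubeadm config validate` exists in recent kubeadm releases but is an assumption here, not a step this run performs:

    # Sanity-check the rendered config before kubeadm init uses it
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
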
	I1119 02:42:49.567525  302848 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:49.572161  302848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:49.583669  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:49.696403  302848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:49.727028  302848 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173 for IP: 192.168.85.2
	I1119 02:42:49.727140  302848 certs.go:195] generating shared ca certs ...
	I1119 02:42:49.727168  302848 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:49.727476  302848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:49.727544  302848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:49.727557  302848 certs.go:257] generating profile certs ...
	I1119 02:42:49.727625  302848 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key
	I1119 02:42:49.727650  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt with IP's: []
	I1119 02:42:50.145686  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt ...
	I1119 02:42:50.145726  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt: {Name:mke65652a37d1645724814d58214d8122c0736b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.145910  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key ...
	I1119 02:42:50.145933  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key: {Name:mk4ef5d0666a41b73aa30b3e0755e11f9f8fb3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.146056  302848 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4
	I1119 02:42:50.146079  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 02:42:50.407271  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 ...
	I1119 02:42:50.407295  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4: {Name:mk5f035a33d372bd059255b16679fd50e2c33fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.407442  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4 ...
	I1119 02:42:50.407456  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4: {Name:mka92b1af7e6c09f8bfc52286518647800bcb5a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.407529  302848 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt
	I1119 02:42:50.407602  302848 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key
	I1119 02:42:50.407658  302848 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key
	I1119 02:42:50.407673  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt with IP's: []
	I1119 02:42:51.018427  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt ...
	I1119 02:42:51.018475  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt: {Name:mkaf83dc022cbae8f555c0ae724724cf38e2e4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:51.018641  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key ...
	I1119 02:42:51.018703  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key: {Name:mk810704305f00f9b6af79898dc7dd3a9f2fe056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
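The certs.go/crypto.go sequence above creates the profile's client, apiserver, and aggregator ("proxy-client") certificates, each signed by the shared minikubeCA, with the apiserver cert carrying the IP SANs listed in the log (the service VIP 10.96.0.1, loopback, and the node IP 192.168.85.2). A self-contained Go sketch of CA-signed certificate generation with IP SANs using only the standard library; this illustrates the mechanism and is not minikube's code (note CertExpiration:26280h0m0s in the config below is three years, matching AddDate(3, 0, 0) here):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, analogous to minikubeCA.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // Apiserver-style leaf cert with the IP SANs seen in the log.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }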
	I1119 02:42:51.018949  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:51.019001  302848 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:51.019016  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:51.019050  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:51.019085  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:51.019116  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:51.019168  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:51.019875  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:51.045884  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:51.068119  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:51.085405  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:51.102412  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:42:51.119942  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:42:51.141845  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:51.163668  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:42:51.185276  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:51.206376  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:51.223822  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:51.240933  302848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:51.254070  302848 ssh_runner.go:195] Run: openssl version
	I1119 02:42:51.260133  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:51.268759  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.272373  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.272418  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.314661  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:42:51.325625  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:51.335401  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.339792  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.339844  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.374219  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:51.382719  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:51.391325  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.395186  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.395235  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.433387  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
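The `openssl x509 -hash -noout` invocations compute the subject-name hash that OpenSSL uses to look up CA certificates in /etc/ssl/certs, and each `ln -fs` creates the corresponding <hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 above) so the runtime trusts the copied PEMs. A hedged Go sketch that shells out to openssl the same way (the path is an example; the real steps run over SSH under sudo):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // emulate ln -fs: replace any existing link
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }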
	I1119 02:42:51.441878  302848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:51.446149  302848 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:42:51.446206  302848 kubeadm.go:401] StartCluster: {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:51.446288  302848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:51.446341  302848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:51.474545  302848 cri.go:89] found id: ""
	I1119 02:42:51.474598  302848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:51.483078  302848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:51.491910  302848 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:51.491960  302848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:51.500593  302848 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:51.500610  302848 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:51.500655  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:42:51.508497  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:51.508546  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:51.516422  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:42:51.525757  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:51.525807  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:51.536275  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:42:51.545935  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:51.545987  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:51.554976  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:42:51.563559  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:51.563604  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
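Because this is a fresh node, none of the four kubeconfig files exist, so each grep exits 2 and the file is removed anyway; the pattern is simply "delete any kubeconfig that does not point at the expected control-plane endpoint" so kubeadm regenerates it. A compact Go rendering of that check (paths and endpoint taken straight from the log; a sketch, not the kubeadm.go source):

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := filepath.Join("/etc/kubernetes", name)
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or stale: remove it (equivalent of `sudo rm -f`).
                os.Remove(path)
            }
        }
    }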
	I1119 02:42:51.570652  302848 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:51.615030  302848 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:51.615151  302848 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:51.639511  302848 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:51.639676  302848 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:51.639872  302848 kubeadm.go:319] OS: Linux
	I1119 02:42:51.639979  302848 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:51.640073  302848 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:51.640147  302848 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:51.640208  302848 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:51.640267  302848 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:51.640326  302848 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:51.640387  302848 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:51.640451  302848 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:51.708966  302848 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:51.709135  302848 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:51.709283  302848 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:51.716801  302848 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:42:49.083522  306860 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-167150:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.671639706s)
	I1119 02:42:49.083553  306860 kic.go:203] duration metric: took 5.671789118s to extract preloaded images to volume ...
	W1119 02:42:49.083624  306860 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:42:49.083651  306860 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:42:49.083684  306860 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:42:49.149882  306860 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-167150 --name default-k8s-diff-port-167150 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --network default-k8s-diff-port-167150 --ip 192.168.94.2 --volume default-k8s-diff-port-167150:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:42:49.500594  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Running}}
	I1119 02:42:49.523895  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:49.547442  306860 cli_runner.go:164] Run: docker exec default-k8s-diff-port-167150 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:42:49.600101  306860 oci.go:144] the created container "default-k8s-diff-port-167150" has a running status.
	I1119 02:42:49.600142  306860 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa...
	I1119 02:42:50.269489  306860 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:42:50.295459  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:50.315528  306860 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:42:50.315562  306860 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-167150 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:42:50.356860  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:50.374600  306860 machine.go:94] provisionDockerMachine start ...
	I1119 02:42:50.374689  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.391114  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.391363  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.391382  306860 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:42:50.523354  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:42:50.523388  306860 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-167150"
	I1119 02:42:50.523491  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.548578  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.549009  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.549031  306860 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-167150 && echo "default-k8s-diff-port-167150" | sudo tee /etc/hostname
	I1119 02:42:50.708967  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:42:50.709056  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.729860  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.730154  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.730186  306860 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-167150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-167150/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-167150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:42:50.877302  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: 
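libmachine's "native" SSH client seen above is Go's golang.org/x/crypto/ssh package dialed against the container's published SSH port (127.0.0.1:33103 here) with the generated id_rsa key. A minimal sketch of running one command over that kind of connection; the key path is shortened to an example, and skipping host-key verification mirrors what a throwaway test rig does:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-167150/id_rsa"))
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for an ephemeral test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33103", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }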
	I1119 02:42:50.877332  306860 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:42:50.877354  306860 ubuntu.go:190] setting up certificates
	I1119 02:42:50.877366  306860 provision.go:84] configureAuth start
	I1119 02:42:50.877421  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:50.899681  306860 provision.go:143] copyHostCerts
	I1119 02:42:50.899742  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:42:50.899755  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:42:50.899823  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:42:50.899935  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:42:50.899952  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:42:50.899994  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:42:50.900091  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:42:50.900100  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:42:50.900133  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:42:50.900206  306860 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-167150 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-167150 localhost minikube]
	I1119 02:42:51.790042  306860 provision.go:177] copyRemoteCerts
	I1119 02:42:51.790120  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:42:51.790163  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:51.812679  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:51.914566  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:42:51.933520  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 02:42:51.951210  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:42:51.972791  306860 provision.go:87] duration metric: took 1.095412973s to configureAuth
	I1119 02:42:51.972820  306860 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:42:51.973010  306860 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:51.973126  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:51.993887  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:51.994333  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:51.994382  306860 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:42:51.720233  302848 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:51.720329  302848 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:51.720424  302848 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:52.110567  302848 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:52.469402  302848 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:52.783731  302848 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:53.170607  302848 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:53.607637  302848 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:53.607789  302848 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-811173 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:42:52.305265  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:42:52.305290  306860 machine.go:97] duration metric: took 1.930670923s to provisionDockerMachine
	I1119 02:42:52.305303  306860 client.go:176] duration metric: took 9.909346044s to LocalClient.Create
	I1119 02:42:52.305321  306860 start.go:167] duration metric: took 9.909403032s to libmachine.API.Create "default-k8s-diff-port-167150"
	I1119 02:42:52.305331  306860 start.go:293] postStartSetup for "default-k8s-diff-port-167150" (driver="docker")
	I1119 02:42:52.305347  306860 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:42:52.305414  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:42:52.305477  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.326893  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.427784  306860 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:42:52.432280  306860 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:42:52.432314  306860 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:42:52.432326  306860 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:42:52.432378  306860 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:42:52.432493  306860 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:42:52.432606  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:42:52.440486  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:52.461537  306860 start.go:296] duration metric: took 156.190397ms for postStartSetup
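The filesync scan above mirrors anything under .minikube/files into the node at the same path rooted at /: files/etc/ssl/certs/146342.pem becomes /etc/ssl/certs/146342.pem. A sketch of that path mapping (local walk only, no copying; the root is an example path):

    package main

    import (
        "fmt"
        "io/fs"
        "os"
        "path/filepath"
    )

    func main() {
        root := os.ExpandEnv("$HOME/.minikube/files") // example root
        err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, _ := filepath.Rel(root, p)
            // e.g. etc/ssl/certs/146342.pem -> /etc/ssl/certs/146342.pem
            fmt.Printf("local asset: %s -> /%s\n", p, rel)
            return nil
        })
        if err != nil {
            panic(err)
        }
    }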
	I1119 02:42:52.461851  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:52.483860  306860 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json ...
	I1119 02:42:52.484137  306860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:42:52.484184  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.504090  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.602388  306860 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:42:52.607059  306860 start.go:128] duration metric: took 10.212819294s to createHost
	I1119 02:42:52.607086  306860 start.go:83] releasing machines lock for "default-k8s-diff-port-167150", held for 10.212970587s
	I1119 02:42:52.607148  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:52.626059  306860 ssh_runner.go:195] Run: cat /version.json
	I1119 02:42:52.626109  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.626132  306860 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:42:52.626195  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.646677  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.647867  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.822035  306860 ssh_runner.go:195] Run: systemctl --version
	I1119 02:42:52.831419  306860 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:42:52.869148  306860 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:42:52.873990  306860 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:42:52.874068  306860 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:42:52.901044  306860 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
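The find/-exec mv pass renames every bridge or podman CNI config to *.mk_disabled so that kindnet (recommended later for the docker driver + crio runtime) is the only network plugin CRI-O sees. The same filter expressed in Go (a sketch, not the actual cni.go logic):

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const dir = "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                p := filepath.Join(dir, name)
                // Needs root in practice, like the `sudo find ... -exec mv` above.
                os.Rename(p, p+".mk_disabled")
            }
        }
    }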
	I1119 02:42:52.901066  306860 start.go:496] detecting cgroup driver to use...
	I1119 02:42:52.901097  306860 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:42:52.901141  306860 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:42:52.917792  306860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:42:52.932809  306860 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:42:52.932864  306860 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:42:52.953113  306860 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:42:52.974059  306860 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:42:53.085982  306860 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:42:53.191486  306860 docker.go:234] disabling docker service ...
	I1119 02:42:53.191545  306860 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:42:53.209965  306860 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:42:53.222536  306860 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:42:53.334426  306860 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:42:53.452134  306860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:42:53.470021  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:42:53.491692  306860 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:42:53.491759  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.507808  306860 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:42:53.507878  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.521160  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.533686  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.545419  306860 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:42:53.559221  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.572537  306860 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.591930  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
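Each of these sed invocations is a line-level rewrite of /etc/crio/crio.conf.d/02-crio.conf: point pause_image at registry.k8s.io/pause:3.10.1, force cgroup_manager to systemd, pin conmon_cgroup to pod, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. One of those rewrites as a Go regexp sketch (same effect as the first sed, under the assumption the key already exists in the file):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            panic(err)
        }
    }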
	I1119 02:42:53.604233  306860 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:42:53.612761  306860 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:42:53.620567  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:53.702418  306860 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:42:54.895903  306860 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.19344223s)
	I1119 02:42:54.895934  306860 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:42:54.895987  306860 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:42:54.899921  306860 start.go:564] Will wait 60s for crictl version
	I1119 02:42:54.899979  306860 ssh_runner.go:195] Run: which crictl
	I1119 02:42:54.903499  306860 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:42:54.927965  306860 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:42:54.928037  306860 ssh_runner.go:195] Run: crio --version
	I1119 02:42:54.960299  306860 ssh_runner.go:195] Run: crio --version
	I1119 02:42:55.000689  306860 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
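"Will wait 60s for socket path" and "Will wait 60s for crictl version" above are simple bounded polls: stat the socket (or run crictl) until it succeeds or the deadline passes. A sketch of that wait loop (the 500ms interval is my assumption, not taken from the log):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := os.Stat(sock); err == nil {
                fmt.Println("socket ready:", sock)
                return
            }
            if time.Now().After(deadline) {
                fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
                os.Exit(1)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }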
	I1119 02:42:51.242518  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.36093376s)
	I1119 02:42:51.242553  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 02:42:51.242587  299668 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:42:51.242638  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:42:51.884817  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 02:42:51.884865  299668 cache_images.go:125] Successfully loaded all cached images
	I1119 02:42:51.884872  299668 cache_images.go:94] duration metric: took 16.678403063s to LoadCachedImages
	I1119 02:42:51.884886  299668 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 02:42:51.884977  299668 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-837474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:42:51.885077  299668 ssh_runner.go:195] Run: crio config
	I1119 02:42:51.934055  299668 cni.go:84] Creating CNI manager for ""
	I1119 02:42:51.934075  299668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:51.934089  299668 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:51.934107  299668 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-837474 NodeName:no-preload-837474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:51.934256  299668 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-837474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
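	The generated kubeadm.yaml above is four YAML documents in one stream, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that walks such a multi-document stream with gopkg.in/yaml.v3 and prints each document's kind, just to illustrate the structure (kubeadm itself does typed decoding of these API groups):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    return
                }
                panic(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }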
	
	I1119 02:42:51.934344  299668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:51.942351  299668 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 02:42:51.942409  299668 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:51.950268  299668 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1119 02:42:51.950341  299668 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1119 02:42:51.950376  299668 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1119 02:42:51.950348  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 02:42:51.954459  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 02:42:51.954493  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1119 02:42:53.238137  299668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:42:53.257679  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 02:42:53.263721  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 02:42:53.263752  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1119 02:42:53.344069  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 02:42:53.351667  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 02:42:53.351703  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
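In this no-preload profile the kubelet/kubeadm/kubectl binaries are absent from the image, so they are downloaded from dl.k8s.io with the expected digest taken from the matching .sha256 URL, then scp'd into /var/lib/minikube/binaries. A sketch of download-and-verify (URLs from the log; the `fetch` helper is mine, and a real implementation would stream to disk rather than buffer ~60 MB in memory):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    func main() {
        const base = "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
        sum, err := fetch(base + ".sha256")
        if err != nil {
            panic(err)
        }
        want := strings.Fields(string(sum))[0]
        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        got := sha256.Sum256(bin)
        if hex.EncodeToString(got[:]) != want {
            panic("checksum mismatch")
        }
        fmt.Println("verified", base)
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
            panic(err)
        }
    }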
	I1119 02:42:53.612715  299668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:53.620479  299668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:42:53.633087  299668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:53.657867  299668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1119 02:42:53.670102  299668 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:53.673427  299668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:53.683353  299668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:53.768236  299668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:53.789788  299668 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474 for IP: 192.168.103.2
	I1119 02:42:53.789809  299668 certs.go:195] generating shared ca certs ...
	I1119 02:42:53.789829  299668 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:53.789987  299668 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:53.790033  299668 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:53.790044  299668 certs.go:257] generating profile certs ...
	I1119 02:42:53.790109  299668 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key
	I1119 02:42:53.790124  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt with IP's: []
	I1119 02:42:54.153349  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt ...
	I1119 02:42:54.153376  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt: {Name:mk582fda973473014e16fbac704f7616a0f6aa62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:54.162415  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key ...
	I1119 02:42:54.162455  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key: {Name:mkf82ec201b7ec108f85e3c1cb709e2e0c644536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:54.162615  299668 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449
	I1119 02:42:54.162634  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1119 02:42:50.957718  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:51.457622  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:51.958197  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:52.457608  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:52.957737  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:53.457646  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:53.957900  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:54.457538  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:54.957631  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:55.052333  291163 kubeadm.go:1114] duration metric: took 12.695568902s to wait for elevateKubeSystemPrivileges
	I1119 02:42:55.052368  291163 kubeadm.go:403] duration metric: took 26.311686714s to StartCluster
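The repeated `kubectl get sa default` calls at 500ms intervals above are the elevateKubeSystemPrivileges wait: the cluster is only considered ready for addons once the default service account exists. The same retry shape in Go, shelling out to kubectl (a sketch; the real code runs this over minikube's ssh_runner, and the overall time budget here is my assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }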
	I1119 02:42:55.052395  291163 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.052484  291163 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:42:55.053537  291163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.053789  291163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:42:55.053803  291163 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:42:55.053872  291163 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:42:55.053963  291163 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-987573"
	I1119 02:42:55.053987  291163 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-987573"
	I1119 02:42:55.054018  291163 host.go:66] Checking if "old-k8s-version-987573" exists ...
	I1119 02:42:55.054054  291163 config.go:182] Loaded profile config "old-k8s-version-987573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:42:55.054262  291163 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-987573"
	I1119 02:42:55.054313  291163 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-987573"
	I1119 02:42:55.054691  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.054736  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.058586  291163 out.go:179] * Verifying Kubernetes components...
	I1119 02:42:55.060065  291163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:55.084656  291163 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-987573"
	I1119 02:42:55.084747  291163 host.go:66] Checking if "old-k8s-version-987573" exists ...
	I1119 02:42:55.085405  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.085634  291163 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:42:55.086927  291163 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:42:55.086947  291163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:42:55.086995  291163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-987573
	I1119 02:42:55.121554  291163 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:42:55.121580  291163 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:42:55.121762  291163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-987573
	I1119 02:42:55.128371  291163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/old-k8s-version-987573/id_rsa Username:docker}
	I1119 02:42:55.160205  291163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/old-k8s-version-987573/id_rsa Username:docker}
	I1119 02:42:55.181208  291163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:42:55.259110  291163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:55.264651  291163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:42:55.282490  291163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:42:55.568676  291163 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 02:42:55.569719  291163 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-987573" to be "Ready" ...
	I1119 02:42:55.795625  291163 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:42:55.796948  291163 addons.go:515] duration metric: took 743.057906ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:42:54.248395  302848 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:54.248580  302848 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-811173 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:42:54.313308  302848 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:42:54.706382  302848 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:42:54.983151  302848 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:42:54.983371  302848 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:42:55.301965  302848 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:42:55.490617  302848 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:42:55.599136  302848 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:42:55.872895  302848 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:42:56.305311  302848 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:42:56.308494  302848 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:42:56.312387  302848 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:42:55.174521  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 ...
	I1119 02:42:55.174557  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449: {Name:mk5097a5f345e6abc2d685019cd0e0e0dd64d577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.174776  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449 ...
	I1119 02:42:55.174793  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449: {Name:mkab8fc1530b6e08d3a7078856d1f9ebfde15951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.174905  299668 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt
	I1119 02:42:55.174995  299668 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key
	I1119 02:42:55.175062  299668 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key
	I1119 02:42:55.175088  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt with IP's: []
	I1119 02:42:55.677842  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt ...
	I1119 02:42:55.677879  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt: {Name:mkecc3d139808fcfd56c1c505daef9b4314f266d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.678058  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key ...
	I1119 02:42:55.678074  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key: {Name:mkfd946463670be5706400ebe2ff5e4540ed9b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.678301  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:55.678346  299668 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:55.678360  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:55.678394  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:55.678425  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:55.678472  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:55.678534  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:55.679296  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:55.700801  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:55.720342  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:55.741236  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:55.764042  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:42:55.785834  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1119 02:42:55.807648  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:55.827008  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:42:55.845962  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:55.864695  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:55.881798  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:55.898727  299668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:55.910163  299668 ssh_runner.go:195] Run: openssl version
	I1119 02:42:55.915785  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:55.923580  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.926945  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.927022  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.969227  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:55.978464  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:55.988370  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:55.992980  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:55.993028  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.051633  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:42:56.065808  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:56.079199  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.084981  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.085033  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.140499  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:42:56.151987  299668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:56.156998  299668 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:42:56.157063  299668 kubeadm.go:401] StartCluster: {Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:56.157164  299668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:56.157224  299668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:56.191409  299668 cri.go:89] found id: ""
	I1119 02:42:56.191487  299668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:56.203572  299668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:56.214503  299668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:56.214560  299668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:56.224485  299668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:56.224520  299668 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:56.224563  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:42:56.234337  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:56.234389  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:56.243718  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:42:56.254141  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:56.254192  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:56.263696  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:42:56.273116  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:56.273160  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:56.281275  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:42:56.290803  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:56.290848  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:42:56.300377  299668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:56.355983  299668 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:56.356057  299668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:56.389799  299668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:56.389890  299668 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:56.389940  299668 kubeadm.go:319] OS: Linux
	I1119 02:42:56.390011  299668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:56.390069  299668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:56.390131  299668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:56.390190  299668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:56.390253  299668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:56.390334  299668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:56.390396  299668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:56.390484  299668 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:56.476300  299668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:56.476471  299668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:56.476678  299668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:56.498223  299668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:42:55.001904  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:55.019551  306860 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:42:55.023819  306860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:55.035169  306860 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:42:55.035294  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:55.035349  306860 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:55.082998  306860 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:55.083033  306860 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:42:55.083093  306860 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:55.133091  306860 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:55.133117  306860 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:42:55.133127  306860 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1119 02:42:55.133229  306860 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-167150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:42:55.133303  306860 ssh_runner.go:195] Run: crio config
	I1119 02:42:55.202350  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:42:55.202422  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:55.202527  306860 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:55.202583  306860 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-167150 NodeName:default-k8s-diff-port-167150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:55.202750  306860 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-167150"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:42:55.202816  306860 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:55.212677  306860 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:42:55.212740  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:55.222763  306860 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 02:42:55.238734  306860 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:55.263173  306860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1119 02:42:55.284386  306860 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:55.294186  306860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:55.309928  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:55.457096  306860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:55.486617  306860 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150 for IP: 192.168.94.2
	I1119 02:42:55.486643  306860 certs.go:195] generating shared ca certs ...
	I1119 02:42:55.486664  306860 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.486870  306860 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:55.486993  306860 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:55.487012  306860 certs.go:257] generating profile certs ...
	I1119 02:42:55.487088  306860 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key
	I1119 02:42:55.487102  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt with IP's: []
	I1119 02:42:56.094930  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt ...
	I1119 02:42:56.094965  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt: {Name:mk026804441dc7b69d5672d318a7041c3c66d037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.095134  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key ...
	I1119 02:42:56.095149  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key: {Name:mk48f5330ed931b78c15c78cffd61daf6c38116c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.095247  306860 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4
	I1119 02:42:56.095265  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1119 02:42:56.225092  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 ...
	I1119 02:42:56.225159  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4: {Name:mk96b6176b7d10d9bf2189cc1a892c03f023c6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.225342  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4 ...
	I1119 02:42:56.225363  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4: {Name:mk1968f40809874a1e5baaa63347f3037839ec18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.225677  306860 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt
	I1119 02:42:56.225860  306860 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key
	I1119 02:42:56.226000  306860 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key
	I1119 02:42:56.226018  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt with IP's: []
	I1119 02:42:56.364736  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt ...
	I1119 02:42:56.364766  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt: {Name:mk250838ee0813d8a1018cfdbc728e6a6682cbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.364947  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key ...
	I1119 02:42:56.364966  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key: {Name:mkf8d3d5c9e799a5f275d845a37b4700ad82ae66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.365187  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:56.365235  306860 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:56.365250  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:56.365288  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:56.365320  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:56.365352  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:56.365408  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:56.365996  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:56.390329  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:56.417649  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:56.439510  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:56.464545  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 02:42:56.495174  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:42:56.522898  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:56.545477  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:42:56.569966  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:56.596790  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:56.618988  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:56.641382  306860 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:56.659625  306860 ssh_runner.go:195] Run: openssl version
	I1119 02:42:56.667985  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:56.677102  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.680868  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.680921  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.728253  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:42:56.738101  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:56.748790  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.753545  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.753606  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.810205  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:42:56.821949  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:56.833110  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.838128  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.838183  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.891211  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:56.903114  306860 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:56.907959  306860 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:42:56.908012  306860 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:56.908102  306860 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:56.908149  306860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:56.940502  306860 cri.go:89] found id: ""
	I1119 02:42:56.940561  306860 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:56.950549  306860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:56.960914  306860 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:56.960969  306860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:56.971164  306860 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:56.971180  306860 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:56.971221  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 02:42:56.981206  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:56.981266  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:56.990677  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 02:42:57.001004  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:57.001054  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:57.011142  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 02:42:57.022773  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:57.022824  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:57.033930  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 02:42:57.043496  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:57.043549  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:42:57.052850  306860 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:57.102312  306860 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:57.102384  306860 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:57.124619  306860 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:57.124731  306860 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:57.124806  306860 kubeadm.go:319] OS: Linux
	I1119 02:42:57.124877  306860 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:57.124940  306860 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:57.125010  306860 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:57.125075  306860 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:57.125121  306860 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:57.125176  306860 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:57.125246  306860 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:57.125304  306860 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:57.195789  306860 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:57.195928  306860 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:57.196075  306860 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:57.203186  306860 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:42:56.313962  302848 out.go:252]   - Booting up control plane ...
	I1119 02:42:56.314089  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:42:56.314233  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:42:56.315640  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:42:56.334919  302848 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:42:56.335093  302848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:42:56.347888  302848 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:42:56.348202  302848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:42:56.348467  302848 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:42:56.489302  302848 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:42:56.489520  302848 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:42:57.490788  302848 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001707522s
	I1119 02:42:57.494204  302848 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:42:57.494338  302848 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 02:42:57.494504  302848 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:42:57.494636  302848 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:42:56.501424  299668 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:56.501541  299668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:56.501670  299668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:56.649197  299668 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:57.131296  299668 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:57.360417  299668 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:57.537498  299668 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:57.630421  299668 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:57.630669  299668 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-837474] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 02:42:57.690142  299668 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:57.692964  299668 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-837474] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 02:42:58.271962  299668 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:42:58.474942  299668 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:42:58.759980  299668 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:42:58.760242  299668 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:42:59.509507  299668 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:42:56.077921  291163 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-987573" context rescaled to 1 replicas
	W1119 02:42:57.695200  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	W1119 02:43:00.073408  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	I1119 02:42:59.510574  302848 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.016320605s
	I1119 02:43:00.061250  302848 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.566984077s
	I1119 02:43:00.995299  302848 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501086445s
	I1119 02:43:01.005851  302848 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:01.015707  302848 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:01.023229  302848 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:01.023570  302848 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-811173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:01.031334  302848 kubeadm.go:319] [bootstrap-token] Using token: 7mjhrd.yzq9kll5v9huaptf
	I1119 02:43:00.399900  299668 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:43:01.316795  299668 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:43:01.487746  299668 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:43:01.585498  299668 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:43:01.586110  299668 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:43:01.590136  299668 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:42:57.204524  306860 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:57.204623  306860 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:57.204687  306860 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:57.340602  306860 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:57.763784  306860 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:58.132475  306860 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:58.496067  306860 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:59.065287  306860 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:59.065574  306860 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-167150 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:42:59.997463  306860 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:59.997634  306860 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-167150 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:43:00.551535  306860 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:43:00.590706  306860 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:43:00.670505  306860 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:43:00.670748  306860 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:43:00.836954  306860 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:43:00.975878  306860 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:43:01.234661  306860 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:43:01.776990  306860 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:43:01.935581  306860 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:43:01.936081  306860 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:43:01.939514  306860 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:43:01.942389  306860 out.go:252]   - Booting up control plane ...
	I1119 02:43:01.942532  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:43:01.942649  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:43:01.942759  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:43:01.957695  306860 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:43:01.957851  306860 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:43:01.964809  306860 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:43:01.966421  306860 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:43:01.966510  306860 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:43:02.081897  306860 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:43:02.082048  306860 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
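[kubelet-check] polls the kubelet's unauthenticated local health endpoint until it answers. The same probe can be issued by hand on the node when this phase hangs:

	# Returns "ok" with HTTP 200 once the kubelet is up; kubeadm allows it up to 4m0s
	curl -sf http://127.0.0.1:10248/healthz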
	I1119 02:43:01.032638  302848 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:01.032800  302848 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:01.035624  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:01.040457  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I1119 02:43:01.043182  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:01.045472  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:01.048002  302848 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:43:01.401444  302848 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:01.820457  302848 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:02.401502  302848 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:02.402643  302848 kubeadm.go:319] 
	I1119 02:43:02.402737  302848 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:02.402774  302848 kubeadm.go:319] 
	I1119 02:43:02.402905  302848 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:02.402932  302848 kubeadm.go:319] 
	I1119 02:43:02.402964  302848 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:02.403044  302848 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:02.403131  302848 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:02.403146  302848 kubeadm.go:319] 
	I1119 02:43:02.403216  302848 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:02.403225  302848 kubeadm.go:319] 
	I1119 02:43:02.403289  302848 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:02.403297  302848 kubeadm.go:319] 
	I1119 02:43:02.403367  302848 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:02.403490  302848 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:02.403605  302848 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:02.403613  302848 kubeadm.go:319] 
	I1119 02:43:02.403712  302848 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:02.403838  302848 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:02.403855  302848 kubeadm.go:319] 
	I1119 02:43:02.403968  302848 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7mjhrd.yzq9kll5v9huaptf \
	I1119 02:43:02.404116  302848 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:02.404149  302848 kubeadm.go:319] 	--control-plane 
	I1119 02:43:02.404153  302848 kubeadm.go:319] 
	I1119 02:43:02.404265  302848 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:02.404277  302848 kubeadm.go:319] 
	I1119 02:43:02.404388  302848 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7mjhrd.yzq9kll5v9huaptf \
	I1119 02:43:02.404566  302848 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:02.407773  302848 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:02.407946  302848 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
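The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; joining nodes use it to authenticate the control plane before trusting anything it serves. It can be recomputed from the CA certificate to verify the printed value (the standard procedure from the Kubernetes docs; the path is the kubeadm default):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'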
	I1119 02:43:02.407964  302848 cni.go:84] Creating CNI manager for ""
	I1119 02:43:02.407972  302848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:02.410242  302848 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:43:02.411389  302848 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:02.416029  302848 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:02.416045  302848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:02.434391  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:02.635779  302848 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:02.635869  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:02.635895  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-811173 minikube.k8s.io/updated_at=2025_11_19T02_43_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-811173 minikube.k8s.io/primary=true
	I1119 02:43:02.646141  302848 ops.go:34] apiserver oom_adj: -16
	I1119 02:43:02.701476  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:03.201546  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:03.701526  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
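The oom_adj probe a few lines up ("cat /proc/$(pgrep kube-apiserver)/oom_adj", reported as -16) confirms the kubelet applied OOM-score protection to the apiserver, so the kernel prefers to kill other processes first under memory pressure. The same one-liner works on any node:

	# A negative value (here -16) means the apiserver is protected relative to default-0 processes
	cat /proc/$(pgrep kube-apiserver)/oom_adj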
	I1119 02:43:01.593467  299668 out.go:252]   - Booting up control plane ...
	I1119 02:43:01.593615  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:43:01.593731  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:43:01.593821  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:43:01.609953  299668 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:43:01.610136  299668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:43:01.617306  299668 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:43:01.617705  299668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:43:01.617773  299668 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:43:01.745744  299668 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:43:01.745917  299668 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:43:02.749850  299668 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00218898s
	I1119 02:43:02.753994  299668 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:43:02.754137  299668 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 02:43:02.754320  299668 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:43:02.754458  299668 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:43:04.243363  299668 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.489187962s
	I1119 02:43:05.042174  299668 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.288115678s
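The three [control-plane-check] probes hit fixed local endpoints, all of which permit unauthenticated health reads by default. When one component never turns healthy, re-running the probes by hand narrows the failure down (the apiserver address is the one from the log above):

	curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez     # kube-scheduler
	curl -k https://192.168.103.2:8443/livez  # kube-apiserver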
	W1119 02:43:02.073659  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	W1119 02:43:04.075000  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	I1119 02:43:06.755785  299668 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00176291s
	I1119 02:43:06.768184  299668 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:06.778618  299668 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:06.786476  299668 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:06.786680  299668 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-837474 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:06.793671  299668 kubeadm.go:319] [bootstrap-token] Using token: 9fycjj.9ujoqc3x92l2ibft
	I1119 02:43:02.583638  306860 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.872402ms
	I1119 02:43:02.588260  306860 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:43:02.588375  306860 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1119 02:43:02.588528  306860 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:43:02.588631  306860 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:43:04.140696  306860 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.552291183s
	I1119 02:43:05.150994  306860 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.562670686s
	I1119 02:43:07.089548  306860 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501158948s
	I1119 02:43:07.101719  306860 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:07.110570  306860 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:07.118309  306860 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:07.118633  306860 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-167150 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:07.128002  306860 kubeadm.go:319] [bootstrap-token] Using token: waagng.bgqyeddkg8xbkifv
	I1119 02:43:07.129465  306860 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:07.129641  306860 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:07.132357  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:07.138676  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I1119 02:43:07.142447  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:07.143596  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:07.145985  306860 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
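The cluster-info ConfigMap created here lives in kube-public and is readable anonymously; it carries a kubeconfig stub with the CA certificate from which joining nodes bootstrap their trust. To inspect it:

	kubectl -n kube-public get configmap cluster-info -o yaml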
	I1119 02:43:04.202036  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:04.702118  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:05.201577  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:05.702195  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:06.202066  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:06.701602  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:07.202550  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:07.292809  302848 kubeadm.go:1114] duration metric: took 4.657004001s to wait for elevateKubeSystemPrivileges
	I1119 02:43:07.292851  302848 kubeadm.go:403] duration metric: took 15.846648283s to StartCluster
	I1119 02:43:07.292874  302848 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:07.292952  302848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:07.294786  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:07.295068  302848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:07.295192  302848 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:07.295259  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:07.295275  302848 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-811173"
	I1119 02:43:07.295295  302848 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-811173"
	I1119 02:43:07.295325  302848 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:07.295866  302848 addons.go:70] Setting default-storageclass=true in profile "embed-certs-811173"
	I1119 02:43:07.295887  302848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-811173"
	I1119 02:43:07.295930  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.296292  302848 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:07.296344  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.297705  302848 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:07.299117  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:07.331934  302848 addons.go:239] Setting addon default-storageclass=true in "embed-certs-811173"
	I1119 02:43:07.331974  302848 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:07.332295  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.332844  302848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:07.334167  302848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:07.334188  302848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:07.334241  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:07.362524  302848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:07.362762  302848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:07.362850  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:07.364663  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:07.388411  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:07.411165  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:07.483920  302848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:07.503288  302848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:07.513295  302848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:07.651779  302848 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 02:43:07.654104  302848 node_ready.go:35] waiting up to 6m0s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:07.881305  302848 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
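Both addons are applied as plain manifests (the two kubectl apply runs above), so their effect is visible as ordinary API objects. A sketch of verifying them, assuming minikube's default object names:

	kubectl -n kube-system get pod storage-provisioner
	kubectl get storageclass standard   # minikube's default StorageClass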
	I1119 02:43:06.795001  299668 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:06.795151  299668 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:06.797762  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:06.802768  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I1119 02:43:06.805038  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:06.807078  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:06.809131  299668 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:43:07.162003  299668 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:07.591067  299668 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:08.161713  299668 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:08.162667  299668 kubeadm.go:319] 
	I1119 02:43:08.162773  299668 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:08.162792  299668 kubeadm.go:319] 
	I1119 02:43:08.162919  299668 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:08.162929  299668 kubeadm.go:319] 
	I1119 02:43:08.162968  299668 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:08.163054  299668 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:08.163127  299668 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:08.163135  299668 kubeadm.go:319] 
	I1119 02:43:08.163218  299668 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:08.163232  299668 kubeadm.go:319] 
	I1119 02:43:08.163270  299668 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:08.163276  299668 kubeadm.go:319] 
	I1119 02:43:08.163318  299668 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:08.163382  299668 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:08.163483  299668 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:08.163500  299668 kubeadm.go:319] 
	I1119 02:43:08.163615  299668 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:08.163733  299668 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:08.163746  299668 kubeadm.go:319] 
	I1119 02:43:08.163885  299668 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9fycjj.9ujoqc3x92l2ibft \
	I1119 02:43:08.164006  299668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:08.164041  299668 kubeadm.go:319] 	--control-plane 
	I1119 02:43:08.164050  299668 kubeadm.go:319] 
	I1119 02:43:08.164194  299668 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:08.164206  299668 kubeadm.go:319] 
	I1119 02:43:08.164311  299668 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9fycjj.9ujoqc3x92l2ibft \
	I1119 02:43:08.164401  299668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:08.166559  299668 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:08.166685  299668 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:43:08.166716  299668 cni.go:84] Creating CNI manager for ""
	I1119 02:43:08.166726  299668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:08.169105  299668 out.go:179] * Configuring CNI (Container Networking Interface) ...
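With the docker driver and the crio runtime, minikube installs kindnet as the CNI: the cni.yaml applied a few lines below ships it as a DaemonSet (the kindnet-* pods visible elsewhere in this log belong to it). To confirm it scheduled:

	kubectl -n kube-system get daemonset kindnet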
	I1119 02:43:07.495981  306860 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:07.914284  306860 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:08.495511  306860 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:08.496414  306860 kubeadm.go:319] 
	I1119 02:43:08.496519  306860 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:08.496532  306860 kubeadm.go:319] 
	I1119 02:43:08.496630  306860 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:08.496640  306860 kubeadm.go:319] 
	I1119 02:43:08.496692  306860 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:08.496819  306860 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:08.496900  306860 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:08.496910  306860 kubeadm.go:319] 
	I1119 02:43:08.497001  306860 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:08.497011  306860 kubeadm.go:319] 
	I1119 02:43:08.497081  306860 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:08.497091  306860 kubeadm.go:319] 
	I1119 02:43:08.497172  306860 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:08.497303  306860 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:08.497404  306860 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:08.497414  306860 kubeadm.go:319] 
	I1119 02:43:08.497561  306860 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:08.497664  306860 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:08.497674  306860 kubeadm.go:319] 
	I1119 02:43:08.497789  306860 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token waagng.bgqyeddkg8xbkifv \
	I1119 02:43:08.497949  306860 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:08.497979  306860 kubeadm.go:319] 	--control-plane 
	I1119 02:43:08.497987  306860 kubeadm.go:319] 
	I1119 02:43:08.498113  306860 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:08.498121  306860 kubeadm.go:319] 
	I1119 02:43:08.498211  306860 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token waagng.bgqyeddkg8xbkifv \
	I1119 02:43:08.498313  306860 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:08.500938  306860 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:08.501038  306860 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:43:08.501062  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:43:08.501071  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:08.502415  306860 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:43:07.882405  302848 addons.go:515] duration metric: took 587.224612ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:08.155743  302848 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-811173" context rescaled to 1 replica
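On a single-node cluster minikube scales the coredns Deployment from its default two replicas down to one; the rescale above is equivalent to:

	kubectl -n kube-system scale deployment coredns --replicas=1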
	I1119 02:43:08.170011  299668 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:08.174308  299668 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:08.174323  299668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:08.187641  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:08.394639  299668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:08.394749  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.394806  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-837474 minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-837474 minikube.k8s.io/primary=true
	I1119 02:43:08.404680  299668 ops.go:34] apiserver oom_adj: -16
	I1119 02:43:08.461588  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.962254  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.461759  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.961662  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
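The repeated "kubectl get sa default" runs above are a poll: they wait for the service account controller to create the default ServiceAccount, the readiness signal after which minikube records the elevateKubeSystemPrivileges duration. A hand-rolled equivalent of that wait loop:

	# Block until the service account controller has created "default"
	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done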
	W1119 02:43:06.573722  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	I1119 02:43:08.072741  291163 node_ready.go:49] node "old-k8s-version-987573" is "Ready"
	I1119 02:43:08.072770  291163 node_ready.go:38] duration metric: took 12.502973194s for node "old-k8s-version-987573" to be "Ready" ...
	I1119 02:43:08.072782  291163 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:08.072824  291163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:08.085646  291163 api_server.go:72] duration metric: took 13.03179653s to wait for apiserver process to appear ...
	I1119 02:43:08.085675  291163 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:08.085696  291163 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:43:08.090892  291163 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:43:08.091918  291163 api_server.go:141] control plane version: v1.28.0
	I1119 02:43:08.091942  291163 api_server.go:131] duration metric: took 6.259879ms to wait for apiserver health ...
	I1119 02:43:08.091952  291163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:08.095373  291163 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:08.095414  291163 system_pods.go:61] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.095426  291163 system_pods.go:61] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.095449  291163 system_pods.go:61] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.095455  291163 system_pods.go:61] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.095461  291163 system_pods.go:61] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.095466  291163 system_pods.go:61] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.095471  291163 system_pods.go:61] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.095478  291163 system_pods.go:61] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.095487  291163 system_pods.go:74] duration metric: took 3.527954ms to wait for pod list to return data ...
	I1119 02:43:08.095497  291163 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:08.097407  291163 default_sa.go:45] found service account: "default"
	I1119 02:43:08.097424  291163 default_sa.go:55] duration metric: took 1.918195ms for default service account to be created ...
	I1119 02:43:08.097462  291163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:08.100635  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.100659  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.100665  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.100671  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.100675  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.100681  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.100686  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.100696  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.100704  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.100731  291163 retry.go:31] will retry after 255.615466ms: missing components: kube-dns
	I1119 02:43:08.360951  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.360990  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.360999  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.361007  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.361012  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.361017  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.361022  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.361027  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.361034  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.361058  291163 retry.go:31] will retry after 283.051609ms: missing components: kube-dns
	I1119 02:43:08.649105  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.649146  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.649155  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.649163  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.649177  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.649183  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.649189  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.649194  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.649201  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.649222  291163 retry.go:31] will retry after 437.362391ms: missing components: kube-dns
	I1119 02:43:09.091273  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:09.091310  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:09.091322  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:09.091328  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:09.091332  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:09.091336  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:09.091339  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:09.091342  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:09.091347  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:09.091360  291163 retry.go:31] will retry after 557.694848ms: missing components: kube-dns
	I1119 02:43:09.654831  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:09.654864  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Running
	I1119 02:43:09.654874  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:09.654880  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:09.654887  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:09.654892  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:09.654897  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:09.654902  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:09.654907  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Running
	I1119 02:43:09.654917  291163 system_pods.go:126] duration metric: took 1.55744718s to wait for k8s-apps to be running ...
	I1119 02:43:09.654931  291163 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:09.654989  291163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:09.668526  291163 system_svc.go:56] duration metric: took 13.587992ms WaitForService to wait for kubelet
	I1119 02:43:09.668557  291163 kubeadm.go:587] duration metric: took 14.614710886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:09.668577  291163 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:09.671058  291163 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:09.671080  291163 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:09.671094  291163 node_conditions.go:105] duration metric: took 2.511044ms to run NodePressure ...
	I1119 02:43:09.671108  291163 start.go:242] waiting for startup goroutines ...
	I1119 02:43:09.671122  291163 start.go:247] waiting for cluster config update ...
	I1119 02:43:09.671138  291163 start.go:256] writing updated cluster config ...
	I1119 02:43:09.671426  291163 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:09.675339  291163 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:09.679685  291163 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-djd8r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.683447  291163 pod_ready.go:94] pod "coredns-5dd5756b68-djd8r" is "Ready"
	I1119 02:43:09.683468  291163 pod_ready.go:86] duration metric: took 3.760218ms for pod "coredns-5dd5756b68-djd8r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.686154  291163 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.690031  291163 pod_ready.go:94] pod "etcd-old-k8s-version-987573" is "Ready"
	I1119 02:43:09.690049  291163 pod_ready.go:86] duration metric: took 3.878026ms for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.692504  291163 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.695894  291163 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-987573" is "Ready"
	I1119 02:43:09.695913  291163 pod_ready.go:86] duration metric: took 3.39096ms for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.700042  291163 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.080305  291163 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-987573" is "Ready"
	I1119 02:43:10.080330  291163 pod_ready.go:86] duration metric: took 380.2693ms for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.279834  291163 pod_ready.go:83] waiting for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.679358  291163 pod_ready.go:94] pod "kube-proxy-tmqhk" is "Ready"
	I1119 02:43:10.679390  291163 pod_ready.go:86] duration metric: took 399.530656ms for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.880413  291163 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:11.279416  291163 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-987573" is "Ready"
	I1119 02:43:11.279469  291163 pod_ready.go:86] duration metric: took 399.023354ms for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:11.279484  291163 pod_ready.go:40] duration metric: took 1.604115977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:11.320952  291163 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:43:11.322818  291163 out.go:203] 
	W1119 02:43:11.324015  291163 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:43:11.325253  291163 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:43:11.326753  291163 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-987573" cluster and "default" namespace by default
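The closing warning flags a client/server skew of six minor versions (kubectl 1.34.2 against Kubernetes 1.28.0), far beyond the supported +/-1 skew, hence the hint to use a version-matched kubectl through minikube:

	minikube -p old-k8s-version-987573 kubectl -- get pods -A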
	I1119 02:43:08.503687  306860 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:08.508285  306860 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:08.508302  306860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:08.523707  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:08.769348  306860 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:08.769426  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.769484  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-167150 minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=default-k8s-diff-port-167150 minikube.k8s.io/primary=true
	I1119 02:43:08.779644  306860 ops.go:34] apiserver oom_adj: -16
	I1119 02:43:08.864308  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.364395  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.865330  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.364616  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.864703  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.364553  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.864420  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.365307  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.440810  306860 kubeadm.go:1114] duration metric: took 3.671440647s to wait for elevateKubeSystemPrivileges
	I1119 02:43:12.440859  306860 kubeadm.go:403] duration metric: took 15.532850823s to StartCluster
	I1119 02:43:12.440882  306860 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:12.440962  306860 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:12.443128  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:12.443390  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:12.443402  306860 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:12.443617  306860 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:12.443467  306860 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:12.443670  306860 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-167150"
	I1119 02:43:12.443679  306860 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-167150"
	I1119 02:43:12.443697  306860 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-167150"
	I1119 02:43:12.443697  306860 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167150"
	I1119 02:43:12.443736  306860 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:43:12.444076  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.444253  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.446396  306860 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:12.447600  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:12.470366  306860 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:12.471033  306860 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-167150"
	I1119 02:43:12.471078  306860 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:43:12.471574  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.472766  306860 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:12.472818  306860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:12.472877  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:12.503314  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:12.503591  306860 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:12.503615  306860 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:12.503672  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:12.534100  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:12.556628  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:12.606106  306860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:12.623922  306860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:12.650781  306860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:12.727240  306860 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
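	The sed pipeline a few lines up edits the coredns ConfigMap in place: it inserts a hosts block before the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, then pushes the result back with `kubectl replace`. Reconstructed from that command, the injected Corefile fragment is:
	
	        hosts {
	           192.168.94.1 host.minikube.internal
	           fallthrough
	        }
	
	so in-cluster lookups of host.minikube.internal resolve to the host-side gateway address, while every other name falls through to the normal forwarders.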
	I1119 02:43:12.728708  306860 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:43:12.921283  306860 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:43:10.461847  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.962221  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.462998  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.962639  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.462654  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.962592  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:13.462281  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:13.526012  299668 kubeadm.go:1114] duration metric: took 5.131316482s to wait for elevateKubeSystemPrivileges
	I1119 02:43:13.526050  299668 kubeadm.go:403] duration metric: took 17.368991046s to StartCluster
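	The seven `kubectl get sa default` runs above (02:43:10.46 through 02:43:13.46, roughly one every 500ms) are a readiness poll: elevateKubeSystemPrivileges waits for the controller-manager to create the `default` ServiceAccount before proceeding. A rough shell equivalent of that loop, with the cadence assumed from the timestamps:
	
	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done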
	I1119 02:43:13.526070  299668 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:13.526144  299668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:13.528869  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:13.529152  299668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:13.529178  299668 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:13.529221  299668 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:13.529318  299668 addons.go:70] Setting storage-provisioner=true in profile "no-preload-837474"
	I1119 02:43:13.529340  299668 addons.go:239] Setting addon storage-provisioner=true in "no-preload-837474"
	I1119 02:43:13.529340  299668 addons.go:70] Setting default-storageclass=true in profile "no-preload-837474"
	I1119 02:43:13.529365  299668 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:13.529370  299668 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:43:13.529375  299668 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-837474"
	I1119 02:43:13.529859  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.530016  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.530719  299668 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:13.531956  299668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:13.553148  299668 addons.go:239] Setting addon default-storageclass=true in "no-preload-837474"
	I1119 02:43:13.553192  299668 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:43:13.553734  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.555218  299668 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:13.556409  299668 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:13.556465  299668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:13.556515  299668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:13.581067  299668 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:13.581088  299668 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:13.581147  299668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:13.587309  299668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:43:13.603773  299668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:43:13.616042  299668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:13.662733  299668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:13.696898  299668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:13.712155  299668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:13.803707  299668 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 02:43:13.805528  299668 node_ready.go:35] waiting up to 6m0s for node "no-preload-837474" to be "Ready" ...
	I1119 02:43:14.021090  299668 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1119 02:43:09.657354  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	W1119 02:43:12.157245  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	I1119 02:43:14.022184  299668 addons.go:515] duration metric: took 492.963117ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:14.308619  299668 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-837474" context rescaled to 1 replicas
	I1119 02:43:12.922563  306860 addons.go:515] duration metric: took 479.097332ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:13.231221  306860 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-167150" context rescaled to 1 replicas
	W1119 02:43:14.732655  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
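	Both clusters log that the "coredns" deployment was rescaled to 1 replica: minikube trims the kubeadm default of two CoreDNS replicas down to one, which is sufficient on a single-node cluster. The operation amounts to a standard Deployment scale, roughly:
	
	    kubectl -n kube-system scale deployment coredns --replicas=1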
	W1119 02:43:14.157530  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	W1119 02:43:16.157612  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	I1119 02:43:18.657467  302848 node_ready.go:49] node "embed-certs-811173" is "Ready"
	I1119 02:43:18.657570  302848 node_ready.go:38] duration metric: took 11.003423276s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:18.657596  302848 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:18.657639  302848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:18.670551  302848 api_server.go:72] duration metric: took 11.375418064s to wait for apiserver process to appear ...
	I1119 02:43:18.670593  302848 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:18.670611  302848 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:18.675195  302848 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:43:18.676254  302848 api_server.go:141] control plane version: v1.34.1
	I1119 02:43:18.676282  302848 api_server.go:131] duration metric: took 5.680617ms to wait for apiserver health ...
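	The healthz wait above polls the apiserver until it answers HTTP 200 with the body `ok`, then reads the control-plane version. Since /healthz is readable by unauthenticated clients under the default RBAC, a manual check from the node is simply (TLS verification skipped here for brevity):
	
	    curl -k https://192.168.85.2:8443/healthz
	    # ok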
	I1119 02:43:18.676292  302848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:18.679796  302848 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:18.679829  302848 system_pods.go:61] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:18.679837  302848 system_pods.go:61] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.679843  302848 system_pods.go:61] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.679849  302848 system_pods.go:61] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.679860  302848 system_pods.go:61] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.679865  302848 system_pods.go:61] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.679873  302848 system_pods.go:61] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.679881  302848 system_pods.go:61] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:18.679892  302848 system_pods.go:74] duration metric: took 3.592078ms to wait for pod list to return data ...
	I1119 02:43:18.679903  302848 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:18.682287  302848 default_sa.go:45] found service account: "default"
	I1119 02:43:18.682313  302848 default_sa.go:55] duration metric: took 2.403388ms for default service account to be created ...
	I1119 02:43:18.682323  302848 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:18.684915  302848 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:18.684945  302848 system_pods.go:89] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:18.684954  302848 system_pods.go:89] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.684965  302848 system_pods.go:89] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.684971  302848 system_pods.go:89] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.684980  302848 system_pods.go:89] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.684986  302848 system_pods.go:89] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.684993  302848 system_pods.go:89] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.685000  302848 system_pods.go:89] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:18.685025  302848 retry.go:31] will retry after 210.702103ms: missing components: kube-dns
	I1119 02:43:18.900340  302848 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:18.900379  302848 system_pods.go:89] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Running
	I1119 02:43:18.900388  302848 system_pods.go:89] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.900394  302848 system_pods.go:89] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.900400  302848 system_pods.go:89] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.900410  302848 system_pods.go:89] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.900415  302848 system_pods.go:89] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.900424  302848 system_pods.go:89] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.900441  302848 system_pods.go:89] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Running
	I1119 02:43:18.900455  302848 system_pods.go:126] duration metric: took 218.125466ms to wait for k8s-apps to be running ...
	I1119 02:43:18.900467  302848 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:18.900516  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:18.914258  302848 system_svc.go:56] duration metric: took 13.781732ms WaitForService to wait for kubelet
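	`systemctl is-active --quiet` produces no output and reports purely through its exit status (0 when the unit is active), which is why the kubelet check above finishes in ~14ms. For example:
	
	    sudo systemctl is-active --quiet kubelet && echo "kubelet is active"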
	I1119 02:43:18.914285  302848 kubeadm.go:587] duration metric: took 11.619154777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:18.914308  302848 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:18.917624  302848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:18.917653  302848 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:18.917668  302848 node_conditions.go:105] duration metric: took 3.351447ms to run NodePressure ...
	I1119 02:43:18.917682  302848 start.go:242] waiting for startup goroutines ...
	I1119 02:43:18.917691  302848 start.go:247] waiting for cluster config update ...
	I1119 02:43:18.917704  302848 start.go:256] writing updated cluster config ...
	I1119 02:43:18.918010  302848 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:18.922579  302848 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:18.927046  302848 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.932042  302848 pod_ready.go:94] pod "coredns-66bc5c9577-6zqr2" is "Ready"
	I1119 02:43:18.932062  302848 pod_ready.go:86] duration metric: took 4.995305ms for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.934146  302848 pod_ready.go:83] waiting for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.938004  302848 pod_ready.go:94] pod "etcd-embed-certs-811173" is "Ready"
	I1119 02:43:18.938027  302848 pod_ready.go:86] duration metric: took 3.859982ms for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.939959  302848 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.943426  302848 pod_ready.go:94] pod "kube-apiserver-embed-certs-811173" is "Ready"
	I1119 02:43:18.943477  302848 pod_ready.go:86] duration metric: took 3.498122ms for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.945295  302848 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
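	This "extra waiting" phase walks the listed control-plane labels and waits for each matching pod to become Ready (or disappear). A rough kubectl equivalent for one selector, with the timeout matching the 4m budget in the log:
	
	    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
	      --for=condition=Ready --timeout=4m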
	
	
	==> CRI-O <==
	Nov 19 02:43:08 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:08.706134075Z" level=info msg="Starting container: 093781e7a6f090b0e6cad5c44342e034087551e1d468d974123ff0d7b598d8b9" id=989a8a26-1ca6-444f-98e4-0b1be92b280c name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:43:08 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:08.709244193Z" level=info msg="Started container" PID=2184 containerID=093781e7a6f090b0e6cad5c44342e034087551e1d468d974123ff0d7b598d8b9 description=kube-system/coredns-5dd5756b68-djd8r/coredns id=989a8a26-1ca6-444f-98e4-0b1be92b280c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1613b70706104c6d8943e5fc39a65ac26898eabe27de2178d9817c09f0f16c8d
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.772330012Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6be63286-5002-4009-995c-82f2d3b4c553 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.772393397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.777652886Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:33b85093da0f118d4da44799fd259cb4b6c0962c8900d4328e918feeaf013b13 UID:9c204876-422a-41f9-9047-80e08d35da45 NetNS:/var/run/netns/9889602a-97fe-4835-b050-1cd25543ed4c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000129518}] Aliases:map[]}"
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.777687671Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.787927777Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:33b85093da0f118d4da44799fd259cb4b6c0962c8900d4328e918feeaf013b13 UID:9c204876-422a-41f9-9047-80e08d35da45 NetNS:/var/run/netns/9889602a-97fe-4835-b050-1cd25543ed4c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000129518}] Aliases:map[]}"
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.788048107Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.788721599Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.789621126Z" level=info msg="Ran pod sandbox 33b85093da0f118d4da44799fd259cb4b6c0962c8900d4328e918feeaf013b13 with infra container: default/busybox/POD" id=6be63286-5002-4009-995c-82f2d3b4c553 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.790693941Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=878e339d-1efe-4bf2-97f1-dcf8bba3b521 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.790787676Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=878e339d-1efe-4bf2-97f1-dcf8bba3b521 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.790833997Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=878e339d-1efe-4bf2-97f1-dcf8bba3b521 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.791277312Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=459885e2-dd81-40b1-82ae-ebc4c5d5e820 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:43:11 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:11.794796075Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.51917773Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=459885e2-dd81-40b1-82ae-ebc4c5d5e820 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.520132032Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fbd86be3-c164-45c1-a45d-4a629be0719f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.522234719Z" level=info msg="Creating container: default/busybox/busybox" id=8955f10c-30ef-49e0-b3f0-3cdf31183fbb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.522371979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.52730449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.527893945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.567957498Z" level=info msg="Created container ba681445582aebb0fc8c73dcaa7272182d28e3b7bd6cee6c3d2fbcd1fc7e1148: default/busybox/busybox" id=8955f10c-30ef-49e0-b3f0-3cdf31183fbb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.568635901Z" level=info msg="Starting container: ba681445582aebb0fc8c73dcaa7272182d28e3b7bd6cee6c3d2fbcd1fc7e1148" id=a60d4b91-1433-4c13-8c3c-965f7762ad2c name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:43:12 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:12.570912963Z" level=info msg="Started container" PID=2260 containerID=ba681445582aebb0fc8c73dcaa7272182d28e3b7bd6cee6c3d2fbcd1fc7e1148 description=default/busybox/busybox id=a60d4b91-1433-4c13-8c3c-965f7762ad2c name=/runtime.v1.RuntimeService/StartContainer sandboxID=33b85093da0f118d4da44799fd259cb4b6c0962c8900d4328e918feeaf013b13
	Nov 19 02:43:18 old-k8s-version-987573 crio[779]: time="2025-11-19T02:43:18.563061244Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
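	The excerpt above is the complete CRI RPC sequence for starting the busybox pod: RunPodSandbox (with the kindnet CNI attach), ImageStatus reporting a miss, PullImage, CreateContainer, and StartContainer. The pull step can be reproduced against the same CRI socket with crictl:
	
	    sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    sudo crictl images | grep busybox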
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ba681445582ae       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   33b85093da0f1       busybox                                          default
	093781e7a6f09       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   1613b70706104       coredns-5dd5756b68-djd8r                         kube-system
	a8abb636a5c03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   8bec2e059c79d       storage-provisioner                              kube-system
	04d8348435ffb       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   d36b9a5436812       kindnet-57t4v                                    kube-system
	03353ccab3740       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   9f33b25e70336       kube-proxy-tmqhk                                 kube-system
	3165a6b498940       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   63782105724b1       etcd-old-k8s-version-987573                      kube-system
	56e3c1efec3a7       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   e1f3c51ca2266       kube-scheduler-old-k8s-version-987573            kube-system
	05d1fd62adb2b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   09abdd407a2f6       kube-apiserver-old-k8s-version-987573            kube-system
	9a05b0c676634       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   3c61b0863ffda       kube-controller-manager-old-k8s-version-987573   kube-system
	
	
	==> coredns [093781e7a6f090b0e6cad5c44342e034087551e1d468d974123ff0d7b598d8b9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40346 - 57700 "HINFO IN 2721516588908259450.2777143633873784177. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.144829217s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-987573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-987573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=old-k8s-version-987573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_42_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:42:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-987573
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:43:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:43:12 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:43:12 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:43:12 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:43:12 +0000   Wed, 19 Nov 2025 02:43:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-987573
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                7b61050c-e4d6-47f6-aa9c-d45cf03b4e83
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-djd8r                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-987573                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-57t4v                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-987573             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-987573    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-tmqhk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-987573             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s   kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s   kubelet          Node old-k8s-version-987573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s   kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node old-k8s-version-987573 event: Registered Node old-k8s-version-987573 in Controller
	  Normal  NodeReady                11s   kubelet          Node old-k8s-version-987573 status is now: NodeReady
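	In the Allocated resources table the percentages are taken against allocatable capacity and truncated to whole numbers: 850m CPU of 8 cores (8000m) is 850/8000 ≈ 10.6%, shown as 10%, and 220Mi of 32863340Ki (about 31.3Gi) is roughly 0.7%, shown as 0%.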
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [3165a6b49894027b5294e55145787191271d29fdeee4efc9dadc5d3d71c707ae] <==
	{"level":"info","ts":"2025-11-19T02:42:39.097221Z","caller":"traceutil/trace.go:171","msg":"trace[1452782708] transaction","detail":"{read_only:false; response_revision:87; number_of_response:1; }","duration":"186.35685ms","start":"2025-11-19T02:42:38.910844Z","end":"2025-11-19T02:42:39.097201Z","steps":["trace[1452782708] 'process raft request'  (duration: 128.228164ms)","trace[1452782708] 'compare'  (duration: 58.004151ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:42:48.585786Z","caller":"traceutil/trace.go:171","msg":"trace[1609945777] transaction","detail":"{read_only:false; response_revision:258; number_of_response:1; }","duration":"102.233812ms","start":"2025-11-19T02:42:48.483527Z","end":"2025-11-19T02:42:48.585761Z","steps":["trace[1609945777] 'process raft request'  (duration: 102.078576ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:54.241574Z","caller":"traceutil/trace.go:171","msg":"trace[927553272] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"130.038143ms","start":"2025-11-19T02:42:54.111511Z","end":"2025-11-19T02:42:54.241549Z","steps":["trace[927553272] 'process raft request'  (duration: 129.842455ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:54.357945Z","caller":"traceutil/trace.go:171","msg":"trace[2075317864] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"110.261172ms","start":"2025-11-19T02:42:54.24767Z","end":"2025-11-19T02:42:54.357931Z","steps":["trace[2075317864] 'process raft request'  (duration: 110.227555ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:54.357963Z","caller":"traceutil/trace.go:171","msg":"trace[117362567] linearizableReadLoop","detail":"{readStateIndex:302; appliedIndex:301; }","duration":"116.458877ms","start":"2025-11-19T02:42:54.241485Z","end":"2025-11-19T02:42:54.357944Z","steps":["trace[117362567] 'read index received'  (duration: 113.319033ms)","trace[117362567] 'applied index is now lower than readState.Index'  (duration: 3.137587ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:42:54.35793Z","caller":"traceutil/trace.go:171","msg":"trace[1266457557] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"210.224955ms","start":"2025-11-19T02:42:54.14768Z","end":"2025-11-19T02:42:54.357905Z","steps":["trace[1266457557] 'process raft request'  (duration: 207.041728ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:42:54.358087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.134235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-19T02:42:54.358464Z","caller":"traceutil/trace.go:171","msg":"trace[439321341] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:293; }","duration":"165.489442ms","start":"2025-11-19T02:42:54.192925Z","end":"2025-11-19T02:42:54.358415Z","steps":["trace[439321341] 'agreement among raft nodes before linearized reading'  (duration: 165.065999ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:42:54.358351Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.287705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-11-19T02:42:54.358713Z","caller":"traceutil/trace.go:171","msg":"trace[844583827] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:293; }","duration":"114.656733ms","start":"2025-11-19T02:42:54.244046Z","end":"2025-11-19T02:42:54.358703Z","steps":["trace[844583827] 'agreement among raft nodes before linearized reading'  (duration: 114.25917ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:54.485773Z","caller":"traceutil/trace.go:171","msg":"trace[216974738] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"120.300142ms","start":"2025-11-19T02:42:54.365455Z","end":"2025-11-19T02:42:54.485756Z","steps":["trace[216974738] 'process raft request'  (duration: 107.199562ms)","trace[216974738] 'compare'  (duration: 12.975071ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:42:54.485975Z","caller":"traceutil/trace.go:171","msg":"trace[1563076025] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"118.122787ms","start":"2025-11-19T02:42:54.36784Z","end":"2025-11-19T02:42:54.485963Z","steps":["trace[1563076025] 'process raft request'  (duration: 117.880713ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:54.48619Z","caller":"traceutil/trace.go:171","msg":"trace[1489958258] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"117.471468ms","start":"2025-11-19T02:42:54.368708Z","end":"2025-11-19T02:42:54.48618Z","steps":["trace[1489958258] 'process raft request'  (duration: 117.205951ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:54.486466Z","caller":"traceutil/trace.go:171","msg":"trace[290112058] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"117.671519ms","start":"2025-11-19T02:42:54.368755Z","end":"2025-11-19T02:42:54.486427Z","steps":["trace[290112058] 'process raft request'  (duration: 117.368624ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:54.75723Z","caller":"traceutil/trace.go:171","msg":"trace[534610776] transaction","detail":"{read_only:false; response_revision:305; number_of_response:1; }","duration":"178.789324ms","start":"2025-11-19T02:42:54.578422Z","end":"2025-11-19T02:42:54.757211Z","steps":["trace[534610776] 'process raft request'  (duration: 178.305873ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:54.760901Z","caller":"traceutil/trace.go:171","msg":"trace[1448664390] linearizableReadLoop","detail":"{readStateIndex:316; appliedIndex:314; }","duration":"167.758513ms","start":"2025-11-19T02:42:54.593125Z","end":"2025-11-19T02:42:54.760884Z","steps":["trace[1448664390] 'read index received'  (duration: 163.657791ms)","trace[1448664390] 'applied index is now lower than readState.Index'  (duration: 4.100055ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:42:54.760975Z","caller":"traceutil/trace.go:171","msg":"trace[1621238460] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"181.29071ms","start":"2025-11-19T02:42:54.579672Z","end":"2025-11-19T02:42:54.760963Z","steps":["trace[1621238460] 'process raft request'  (duration: 181.125008ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:42:54.761027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.905649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-19T02:42:54.761055Z","caller":"traceutil/trace.go:171","msg":"trace[2089255444] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:306; }","duration":"167.955522ms","start":"2025-11-19T02:42:54.59309Z","end":"2025-11-19T02:42:54.761046Z","steps":["trace[2089255444] 'agreement among raft nodes before linearized reading'  (duration: 167.876391ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:42:54.761061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.354914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-19T02:42:54.761105Z","caller":"traceutil/trace.go:171","msg":"trace[2133173907] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:306; }","duration":"117.406941ms","start":"2025-11-19T02:42:54.643685Z","end":"2025-11-19T02:42:54.761092Z","steps":["trace[2133173907] 'agreement among raft nodes before linearized reading'  (duration: 117.318868ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:57.693425Z","caller":"traceutil/trace.go:171","msg":"trace[2071928966] linearizableReadLoop","detail":"{readStateIndex:388; appliedIndex:387; }","duration":"121.208865ms","start":"2025-11-19T02:42:57.572198Z","end":"2025-11-19T02:42:57.693407Z","steps":["trace[2071928966] 'read index received'  (duration: 121.085138ms)","trace[2071928966] 'applied index is now lower than readState.Index'  (duration: 122.98µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:42:57.693558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.375404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-987573\" ","response":"range_response_count:1 size:5710"}
	{"level":"info","ts":"2025-11-19T02:42:57.693585Z","caller":"traceutil/trace.go:171","msg":"trace[1305848711] range","detail":"{range_begin:/registry/minions/old-k8s-version-987573; range_end:; response_count:1; response_revision:375; }","duration":"121.419337ms","start":"2025-11-19T02:42:57.572159Z","end":"2025-11-19T02:42:57.693578Z","steps":["trace[1305848711] 'agreement among raft nodes before linearized reading'  (duration: 121.348941ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:42:57.693657Z","caller":"traceutil/trace.go:171","msg":"trace[631349052] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"124.697114ms","start":"2025-11-19T02:42:57.568937Z","end":"2025-11-19T02:42:57.693634Z","steps":["trace[631349052] 'process raft request'  (duration: 124.338671ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:43:19 up  1:25,  0 user,  load average: 4.83, 3.43, 2.22
	Linux old-k8s-version-987573 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [04d8348435ffb04d3f48d118cc4d94a6430f00461cc7d144b5a70697818f949a] <==
	I1119 02:42:57.453881       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:42:57.454120       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:42:57.454265       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:42:57.454287       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:42:57.454307       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:42:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:42:57.656721       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:42:57.656865       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:42:57.656921       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:42:57.750376       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:42:58.152566       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:42:58.152600       1 metrics.go:72] Registering metrics
	I1119 02:42:58.152665       1 controller.go:711] "Syncing nftables rules"
	I1119 02:43:07.665154       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:43:07.665215       1 main.go:301] handling current node
	I1119 02:43:17.656520       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:43:17.656558       1 main.go:301] handling current node
	
	
	==> kube-apiserver [05d1fd62adb2bbcb31d134719f79599eca5a8675a8d727f249740f29ebeca1da] <==
	I1119 02:42:37.486013       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:42:37.486023       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:42:37.486013       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:42:37.487718       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 02:42:37.501940       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 02:42:37.501959       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1119 02:42:37.579884       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	E1119 02:42:37.580871       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1119 02:42:38.108375       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:42:38.396768       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:42:38.400466       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:42:38.400484       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:42:39.552842       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:42:39.595982       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:42:39.702555       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:42:39.708901       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 02:42:39.710470       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 02:42:39.714745       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:42:40.453298       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 02:42:41.374889       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 02:42:41.398072       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:42:41.410254       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 02:42:54.768816       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 02:42:54.800156       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9a05b0c676634a93f405efb9919898db728616c7420b4f8a43c0986dbec27a79] <==
	I1119 02:42:54.146169       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-987573"
	I1119 02:42:54.146231       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1119 02:42:54.243231       1 range_allocator.go:380] "Set node PodCIDR" node="old-k8s-version-987573" podCIDRs=["10.244.0.0/24"]
	I1119 02:42:54.455295       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:42:54.455328       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 02:42:54.465780       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:42:54.772383       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1119 02:42:54.808503       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tmqhk"
	I1119 02:42:54.810104       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-57t4v"
	I1119 02:42:54.956084       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-smrnv"
	I1119 02:42:54.965496       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-djd8r"
	I1119 02:42:54.972423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="200.640864ms"
	I1119 02:42:54.988574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.011383ms"
	I1119 02:42:54.988731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.598µs"
	I1119 02:42:55.595523       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 02:42:55.607042       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-smrnv"
	I1119 02:42:55.613461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.909413ms"
	I1119 02:42:55.619003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.482497ms"
	I1119 02:42:55.619093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.457µs"
	I1119 02:43:08.045418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.404µs"
	I1119 02:43:08.061046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.595µs"
	I1119 02:43:09.148864       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1119 02:43:09.597709       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.973µs"
	I1119 02:43:09.621659       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.113161ms"
	I1119 02:43:09.621774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.038µs"
	
	
	==> kube-proxy [03353ccab374075c847b719e923023adfa7cde03309af724647ae43b52520d17] <==
	I1119 02:42:55.396207       1 server_others.go:69] "Using iptables proxy"
	I1119 02:42:55.464949       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 02:42:55.502604       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:42:55.505803       1 server_others.go:152] "Using iptables Proxier"
	I1119 02:42:55.505908       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 02:42:55.505936       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 02:42:55.505980       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 02:42:55.516386       1 server.go:846] "Version info" version="v1.28.0"
	I1119 02:42:55.516473       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:42:55.517868       1 config.go:188] "Starting service config controller"
	I1119 02:42:55.517946       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 02:42:55.518089       1 config.go:97] "Starting endpoint slice config controller"
	I1119 02:42:55.518557       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 02:42:55.520805       1 config.go:315] "Starting node config controller"
	I1119 02:42:55.520866       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 02:42:55.618577       1 shared_informer.go:318] Caches are synced for service config
	I1119 02:42:55.621871       1 shared_informer.go:318] Caches are synced for node config
	I1119 02:42:55.623073       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [56e3c1efec3a749441cf568131a48ed203f6bd3b5880bde23d9b283545eeca9e] <==
	W1119 02:42:38.492408       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 02:42:38.492475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 02:42:38.608622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1119 02:42:38.608658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1119 02:42:38.633937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1119 02:42:38.633981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1119 02:42:38.684235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1119 02:42:38.684268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1119 02:42:38.777105       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1119 02:42:38.777133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1119 02:42:38.784386       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 02:42:38.784414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 02:42:38.794945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1119 02:42:38.794973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1119 02:42:38.853510       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 02:42:38.853540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 02:42:38.920569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 02:42:38.920615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 02:42:38.954556       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 02:42:38.954599       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:42:38.963043       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1119 02:42:38.963076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1119 02:42:39.069247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 02:42:39.069283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1119 02:42:41.266536       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.292373    1422 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.293159    1422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.814495    1422 topology_manager.go:215] "Topology Admit Handler" podUID="ef6bd301-05f1-4196-99a7-73e8ff59dc4b" podNamespace="kube-system" podName="kube-proxy-tmqhk"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.817096    1422 topology_manager.go:215] "Topology Admit Handler" podUID="0db2f280-bd80-4848-b27d-5419aa484d18" podNamespace="kube-system" podName="kindnet-57t4v"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.843277    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqh5g\" (UniqueName: \"kubernetes.io/projected/0db2f280-bd80-4848-b27d-5419aa484d18-kube-api-access-tqh5g\") pod \"kindnet-57t4v\" (UID: \"0db2f280-bd80-4848-b27d-5419aa484d18\") " pod="kube-system/kindnet-57t4v"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.843477    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef6bd301-05f1-4196-99a7-73e8ff59dc4b-lib-modules\") pod \"kube-proxy-tmqhk\" (UID: \"ef6bd301-05f1-4196-99a7-73e8ff59dc4b\") " pod="kube-system/kube-proxy-tmqhk"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.843525    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0db2f280-bd80-4848-b27d-5419aa484d18-lib-modules\") pod \"kindnet-57t4v\" (UID: \"0db2f280-bd80-4848-b27d-5419aa484d18\") " pod="kube-system/kindnet-57t4v"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.843557    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef6bd301-05f1-4196-99a7-73e8ff59dc4b-kube-proxy\") pod \"kube-proxy-tmqhk\" (UID: \"ef6bd301-05f1-4196-99a7-73e8ff59dc4b\") " pod="kube-system/kube-proxy-tmqhk"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.843585    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef6bd301-05f1-4196-99a7-73e8ff59dc4b-xtables-lock\") pod \"kube-proxy-tmqhk\" (UID: \"ef6bd301-05f1-4196-99a7-73e8ff59dc4b\") " pod="kube-system/kube-proxy-tmqhk"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.843616    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczw4\" (UniqueName: \"kubernetes.io/projected/ef6bd301-05f1-4196-99a7-73e8ff59dc4b-kube-api-access-gczw4\") pod \"kube-proxy-tmqhk\" (UID: \"ef6bd301-05f1-4196-99a7-73e8ff59dc4b\") " pod="kube-system/kube-proxy-tmqhk"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.843646    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0db2f280-bd80-4848-b27d-5419aa484d18-cni-cfg\") pod \"kindnet-57t4v\" (UID: \"0db2f280-bd80-4848-b27d-5419aa484d18\") " pod="kube-system/kindnet-57t4v"
	Nov 19 02:42:54 old-k8s-version-987573 kubelet[1422]: I1119 02:42:54.843675    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0db2f280-bd80-4848-b27d-5419aa484d18-xtables-lock\") pod \"kindnet-57t4v\" (UID: \"0db2f280-bd80-4848-b27d-5419aa484d18\") " pod="kube-system/kindnet-57t4v"
	Nov 19 02:42:57 old-k8s-version-987573 kubelet[1422]: I1119 02:42:57.695191    1422 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tmqhk" podStartSLOduration=3.695138079 podCreationTimestamp="2025-11-19 02:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:42:55.571993707 +0000 UTC m=+14.235091180" watchObservedRunningTime="2025-11-19 02:42:57.695138079 +0000 UTC m=+16.358235544"
	Nov 19 02:42:57 old-k8s-version-987573 kubelet[1422]: I1119 02:42:57.695330    1422 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-57t4v" podStartSLOduration=1.5989420239999999 podCreationTimestamp="2025-11-19 02:42:54 +0000 UTC" firstStartedPulling="2025-11-19 02:42:55.141540578 +0000 UTC m=+13.804638024" lastFinishedPulling="2025-11-19 02:42:57.237904777 +0000 UTC m=+15.901002230" observedRunningTime="2025-11-19 02:42:57.694839506 +0000 UTC m=+16.357936970" watchObservedRunningTime="2025-11-19 02:42:57.69530623 +0000 UTC m=+16.358403695"
	Nov 19 02:43:08 old-k8s-version-987573 kubelet[1422]: I1119 02:43:08.022859    1422 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 02:43:08 old-k8s-version-987573 kubelet[1422]: I1119 02:43:08.044094    1422 topology_manager.go:215] "Topology Admit Handler" podUID="abe94ba2-07c5-4f03-ab28-00ea277fdc56" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 02:43:08 old-k8s-version-987573 kubelet[1422]: I1119 02:43:08.045384    1422 topology_manager.go:215] "Topology Admit Handler" podUID="38b8c793-304e-42c1-b2a0-ecd1032a5962" podNamespace="kube-system" podName="coredns-5dd5756b68-djd8r"
	Nov 19 02:43:08 old-k8s-version-987573 kubelet[1422]: I1119 02:43:08.238421    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4ljj\" (UniqueName: \"kubernetes.io/projected/abe94ba2-07c5-4f03-ab28-00ea277fdc56-kube-api-access-m4ljj\") pod \"storage-provisioner\" (UID: \"abe94ba2-07c5-4f03-ab28-00ea277fdc56\") " pod="kube-system/storage-provisioner"
	Nov 19 02:43:08 old-k8s-version-987573 kubelet[1422]: I1119 02:43:08.238500    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/abe94ba2-07c5-4f03-ab28-00ea277fdc56-tmp\") pod \"storage-provisioner\" (UID: \"abe94ba2-07c5-4f03-ab28-00ea277fdc56\") " pod="kube-system/storage-provisioner"
	Nov 19 02:43:08 old-k8s-version-987573 kubelet[1422]: I1119 02:43:08.238613    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38b8c793-304e-42c1-b2a0-ecd1032a5962-config-volume\") pod \"coredns-5dd5756b68-djd8r\" (UID: \"38b8c793-304e-42c1-b2a0-ecd1032a5962\") " pod="kube-system/coredns-5dd5756b68-djd8r"
	Nov 19 02:43:08 old-k8s-version-987573 kubelet[1422]: I1119 02:43:08.238660    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmnkl\" (UniqueName: \"kubernetes.io/projected/38b8c793-304e-42c1-b2a0-ecd1032a5962-kube-api-access-jmnkl\") pod \"coredns-5dd5756b68-djd8r\" (UID: \"38b8c793-304e-42c1-b2a0-ecd1032a5962\") " pod="kube-system/coredns-5dd5756b68-djd8r"
	Nov 19 02:43:09 old-k8s-version-987573 kubelet[1422]: I1119 02:43:09.606171    1422 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-djd8r" podStartSLOduration=15.606120792 podCreationTimestamp="2025-11-19 02:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:09.59736792 +0000 UTC m=+28.260465382" watchObservedRunningTime="2025-11-19 02:43:09.606120792 +0000 UTC m=+28.269218254"
	Nov 19 02:43:09 old-k8s-version-987573 kubelet[1422]: I1119 02:43:09.606644    1422 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.606603942 podCreationTimestamp="2025-11-19 02:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:09.606311394 +0000 UTC m=+28.269408867" watchObservedRunningTime="2025-11-19 02:43:09.606603942 +0000 UTC m=+28.269701406"
	Nov 19 02:43:11 old-k8s-version-987573 kubelet[1422]: I1119 02:43:11.470446    1422 topology_manager.go:215] "Topology Admit Handler" podUID="9c204876-422a-41f9-9047-80e08d35da45" podNamespace="default" podName="busybox"
	Nov 19 02:43:11 old-k8s-version-987573 kubelet[1422]: I1119 02:43:11.660703    1422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj25l\" (UniqueName: \"kubernetes.io/projected/9c204876-422a-41f9-9047-80e08d35da45-kube-api-access-rj25l\") pod \"busybox\" (UID: \"9c204876-422a-41f9-9047-80e08d35da45\") " pod="default/busybox"
	
	
	==> storage-provisioner [a8abb636a5c03e1fad01b819122cc032d8ac711f94417240ebd863de36c8bd4c] <==
	I1119 02:43:08.721127       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:43:08.732773       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:43:08.732827       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 02:43:08.741216       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:43:08.741344       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8c272ec-f692-4939-9547-0410130d4526", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-987573_e584746b-7c7d-4db1-98f6-f341fd27c385 became leader
	I1119 02:43:08.741423       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-987573_e584746b-7c7d-4db1-98f6-f341fd27c385!
	I1119 02:43:08.842145       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-987573_e584746b-7c7d-4db1-98f6-f341fd27c385!
	

-- /stdout --
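
The captured logs above show a control plane that is otherwise healthy: CoreDNS is deliberately scaled down to a single replica (minikube runs one CoreDNS pod), kube-proxy syncs its caches, and the scheduler's early "forbidden" list/watch warnings are ordinary startup noise from informers racing RBAC bootstrap; they stop once "Caches are synced" is logged. The kube-proxy note about a missing IPv6 cluster CIDR simply reflects the IPv4-only pod CIDR (10.244.0.0/24) and is unrelated to the failure. A quick spot-check of that state, using the profile's kubeconfig context as elsewhere in this report:

	kubectl --context old-k8s-version-987573 -n kube-system get deploy coredns -o jsonpath='{.spec.replicas}'
	kubectl --context old-k8s-version-987573 get node old-k8s-version-987573 -o jsonpath='{.spec.podCIDRs}'
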
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-987573 -n old-k8s-version-987573
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-987573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.11s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (236.867271ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:43:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
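
This MK_ADDON_ENABLE_PAUSED exit is the shared failure signature of the EnableAddonWhileActive tests in this run: before enabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" inside the node, and on these crio nodes /run/runc does not exist, so the check itself exits non-zero and the enable aborts before any manifest is applied. The check can be reproduced directly against the node container (container name taken from the docker inspect output below):

	docker exec embed-certs-811173 sudo runc list -f json
	docker exec embed-certs-811173 ls /run/runc
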
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-811173 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-811173 describe deploy/metrics-server -n kube-system: exit status 1 (56.028069ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-811173 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
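
The image assertion is vacuous here: because the enable command exited 11, the metrics-server deployment was never created, so there is nothing to compare against the expected fake.domain/registry.k8s.io/echoserver:1.4 (the --registries override prefixed onto the --images value). Had the deployment been created, its image could be read with:

	kubectl --context embed-certs-811173 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
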
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-811173
helpers_test.go:243: (dbg) docker inspect embed-certs-811173:

-- stdout --
	[
	    {
	        "Id": "f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668",
	        "Created": "2025-11-19T02:42:39.275670124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 305564,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:42:39.315043561Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/hostname",
	        "HostsPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/hosts",
	        "LogPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668-json.log",
	        "Name": "/embed-certs-811173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-811173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-811173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668",
	                "LowerDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-811173",
	                "Source": "/var/lib/docker/volumes/embed-certs-811173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-811173",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-811173",
	                "name.minikube.sigs.k8s.io": "embed-certs-811173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "94579e87d925dac4a24e2d57da5d9bf673a16fa141bfff21084606a0d72a9ac0",
	            "SandboxKey": "/var/run/docker/netns/94579e87d925",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-811173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3129c4b605594e1d463b2d85e5ed79f025bb6cff93cf80cdce990db8936b5a9c",
	                    "EndpointID": "54b2fdc8c33e1afde8cc55a78c3c4a5a56636b3f6001f945a130de942d4032dd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1e:73:ca:df:f4:60",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-811173",
	                        "f59ac2b4a856"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
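
Only a few fields of the inspect dump above matter for this failure: the container is running, not paused, and uses the runc runtime. They can be extracted without scanning the full JSON, the same way the harness uses --format for its status checks:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} runtime={{.HostConfig.Runtime}}' embed-certs-811173
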
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-811173 -n embed-certs-811173
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-811173 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-001617 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo docker system info                                                                                                                                 │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cri-dockerd --version                                                                                                                              │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo containerd config dump                                                                                                                             │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo crio config                                                                                                                                        │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p bridge-001617                                                                                                                                                         │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p disable-driver-mounts-682232                                                                                                                                          │ disable-driver-mounts-682232 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p old-k8s-version-987573 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:42:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:42:42.176241  306860 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:42:42.176542  306860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:42:42.176552  306860 out.go:374] Setting ErrFile to fd 2...
	I1119 02:42:42.176557  306860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:42:42.176798  306860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:42:42.177312  306860 out.go:368] Setting JSON to false
	I1119 02:42:42.178694  306860 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5109,"bootTime":1763515053,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:42:42.178817  306860 start.go:143] virtualization: kvm guest
	I1119 02:42:42.181266  306860 out.go:179] * [default-k8s-diff-port-167150] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:42:42.182506  306860 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:42:42.182508  306860 notify.go:221] Checking for updates...
	I1119 02:42:42.184984  306860 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:42:42.186380  306860 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:42:42.187520  306860 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:42:42.188641  306860 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:42:42.189749  306860 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:42:42.191476  306860 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:42.191626  306860 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:42.191747  306860 config.go:182] Loaded profile config "old-k8s-version-987573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:42:42.191879  306860 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:42:42.219938  306860 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:42:42.220096  306860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:42:42.291707  306860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-19 02:42:42.280719148 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:42:42.291851  306860 docker.go:319] overlay module found
	I1119 02:42:42.294039  306860 out.go:179] * Using the docker driver based on user configuration
	I1119 02:42:42.295025  306860 start.go:309] selected driver: docker
	I1119 02:42:42.295045  306860 start.go:930] validating driver "docker" against <nil>
	I1119 02:42:42.295071  306860 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:42:42.295643  306860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:42:42.358641  306860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-19 02:42:42.347786548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:42:42.358876  306860 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:42:42.359101  306860 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:42:42.361283  306860 out.go:179] * Using Docker driver with root privileges
	I1119 02:42:42.362628  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:42:42.362714  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:42.362728  306860 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:42:42.362817  306860 start.go:353] cluster config:
	{Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:42.364219  306860 out.go:179] * Starting "default-k8s-diff-port-167150" primary control-plane node in "default-k8s-diff-port-167150" cluster
	I1119 02:42:42.367198  306860 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:42:42.368425  306860 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:42:42.369910  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:42.369948  306860 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:42:42.369957  306860 cache.go:65] Caching tarball of preloaded images
	I1119 02:42:42.369996  306860 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:42:42.370067  306860 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:42:42.370082  306860 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:42:42.370209  306860 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json ...
	I1119 02:42:42.370241  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json: {Name:mkcddbcc964a690b001741c541d540f001994a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:42.393924  306860 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:42:42.393944  306860 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:42:42.393962  306860 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:42:42.393994  306860 start.go:360] acquireMachinesLock for default-k8s-diff-port-167150: {Name:mk2e469e9e78dab6a8d53f30fec89bc1e449a209 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:42:42.394102  306860 start.go:364] duration metric: took 89.942µs to acquireMachinesLock for "default-k8s-diff-port-167150"
	I1119 02:42:42.394130  306860 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
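The &{Name:... Memory:3072 CPUs:2 ...} dump in the provisioning line above is Go's %+v rendering of minikube's cluster-config struct. A minimal sketch of how such a line is produced; the struct below is a cut-down stand-in for illustration, not minikube's real type (which lives in pkg/minikube/config):

package main

import "fmt"

// Illustrative stand-ins for a few of the fields visible in the log;
// field names are copied from the dump, the type layout is assumed.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
}

type ClusterConfig struct {
	Name             string
	Memory           int
	CPUs             int
	Driver           string
	KubernetesConfig KubernetesConfig
}

func main() {
	cc := &ClusterConfig{
		Name:   "default-k8s-diff-port-167150",
		Memory: 3072,
		CPUs:   2,
		Driver: "docker",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.34.1",
			ClusterName:       "default-k8s-diff-port-167150",
		},
	}
	// %+v prints field names, producing the &{Name:... Memory:3072 ...}
	// shape seen in the "Provisioning new machine with config:" line.
	fmt.Printf("Provisioning new machine with config: %+v\n", cc)
}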
	I1119 02:42:42.394220  306860 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:42:39.183788  302848 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-811173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.239456846s)
	I1119 02:42:39.183822  302848 kic.go:203] duration metric: took 4.239611554s to extract preloaded images to volume ...
	W1119 02:42:39.183909  302848 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:42:39.183954  302848 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:42:39.184001  302848 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:42:39.255629  302848 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-811173 --name embed-certs-811173 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-811173 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-811173 --network embed-certs-811173 --ip 192.168.85.2 --volume embed-certs-811173:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:42:39.648577  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Running}}
	I1119 02:42:39.668032  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:39.687214  302848 cli_runner.go:164] Run: docker exec embed-certs-811173 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:42:39.745898  302848 oci.go:144] the created container "embed-certs-811173" has a running status.
	I1119 02:42:39.745933  302848 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa...
	I1119 02:42:40.188034  302848 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:42:40.217982  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:40.237916  302848 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:42:40.237940  302848 kic_runner.go:114] Args: [docker exec --privileged embed-certs-811173 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:42:40.289247  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:40.309791  302848 machine.go:94] provisionDockerMachine start ...
	I1119 02:42:40.309919  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:40.329857  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:40.330085  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:40.330094  302848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:42:40.330814  302848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40226->127.0.0.1:33098: read: connection reset by peer
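The dial above is rejected because sshd inside the freshly started container is still coming up; libmachine simply retries until the handshake succeeds, which the log shows happening about three seconds later. A minimal sketch of that dial-with-retry pattern (illustrative only, not minikube's actual code; the address is the forwarded port from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection until the server
// accepts or the deadline passes; early attempts against a container
// whose sshd is still starting typically fail with "connection reset
// by peer", exactly as in the log line above.
func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
	var lastErr error
	for start := time.Now(); time.Since(start) < deadline; {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(500 * time.Millisecond) // back off briefly before retrying
	}
	return nil, fmt.Errorf("gave up dialing %s: %w", addr, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33098", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}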
	I1119 02:42:43.466968  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-811173
	
	I1119 02:42:43.466997  302848 ubuntu.go:182] provisioning hostname "embed-certs-811173"
	I1119 02:42:43.467046  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:43.487761  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:43.488030  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:43.488051  302848 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-811173 && echo "embed-certs-811173" | sudo tee /etc/hostname
	I1119 02:42:43.643097  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-811173
	
	I1119 02:42:43.643198  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:43.663378  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:43.663636  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:43.663655  302848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-811173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-811173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-811173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:42:43.798171  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
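The shell fragment above is an idempotent /etc/hosts edit: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new one. The same logic in Go, operating on an in-memory hosts file (a sketch of the pattern, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname reproduces the shell logic above: if no entry already
// ends with the hostname, rewrite an existing 127.0.1.1 line, else
// append one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already present; nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 oldname\n"
	// prints the file with the 127.0.1.1 line rewritten to the new name
	fmt.Print(ensureHostname(in, "embed-certs-811173"))
}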
	I1119 02:42:43.798205  302848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:42:43.798228  302848 ubuntu.go:190] setting up certificates
	I1119 02:42:43.798241  302848 provision.go:84] configureAuth start
	I1119 02:42:43.798305  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:43.819034  302848 provision.go:143] copyHostCerts
	I1119 02:42:43.819102  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:42:43.819115  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:42:43.819176  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:42:43.819262  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:42:43.819270  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:42:43.819297  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:42:43.819360  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:42:43.819368  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:42:43.819392  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
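copyHostCerts above follows a plain remove-then-copy pattern for each of cert.pem, key.pem and ca.pem: stat the destination, delete it if present, then copy the source into place and report the byte count. A minimal Go sketch of that pattern (paths illustrative):

package main

import (
	"fmt"
	"io"
	"os"
)

// replaceFile mirrors the copyHostCerts pattern in the log: if the
// destination already exists it is removed first ("found ..., removing
// ..."), then the source is copied into place.
func replaceFile(src, dst string) (int64, error) {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return 0, err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return 0, err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
	if err != nil {
		return 0, err
	}
	defer out.Close()
	return io.Copy(out, in) // byte count, as logged ("(1123 bytes)")
}

func main() {
	n, err := replaceFile("certs/cert.pem", "cert.pem")
	fmt.Println(n, err)
}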
	I1119 02:42:43.819475  302848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.embed-certs-811173 san=[127.0.0.1 192.168.85.2 embed-certs-811173 localhost minikube]
	I1119 02:42:44.009209  302848 provision.go:177] copyRemoteCerts
	I1119 02:42:44.009280  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:42:44.009327  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.029510  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:40.627209  299668 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.471693628s)
	I1119 02:42:40.627247  299668 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 02:42:40.627277  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1119 02:42:40.627374  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.472009201s)
	I1119 02:42:40.627402  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 02:42:40.627449  299668 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 02:42:40.627495  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1119 02:42:42.166462  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.538920814s)
	I1119 02:42:42.166489  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 02:42:42.166520  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 02:42:42.166567  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 02:42:43.179025  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.012437665s)
	I1119 02:42:43.179053  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 02:42:43.179080  299668 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 02:42:43.179117  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
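The 299668 process above is working through minikube's image-cache loop: each cached tarball under /var/lib/minikube/images is handed to `sudo podman load -i`, one image at a time, with the elapsed time logged per load. A sketch of that loop, simplified: the real code also stats the remote path and scp's missing tarballs first, as the storage-provisioner lines above show.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// loadImages hands each preloaded tarball to `podman load -i` in
// sequence and records the elapsed time per image, the way the
// ssh_runner "Completed: ... (1.538920814s)" lines do. Paths mirror
// the log; run on a host without podman this fails at runtime.
func loadImages(tarballs []string) error {
	for _, tb := range tarballs {
		start := time.Now()
		out, err := exec.Command("sudo", "podman", "load", "-i", tb).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load -i %s: %v\n%s", tb, err, out)
		}
		fmt.Printf("Completed: sudo podman load -i %s: (%s)\n", tb, time.Since(start))
	}
	return nil
}

func main() {
	imgs := []string{
		"/var/lib/minikube/images/coredns_v1.12.1",
		"/var/lib/minikube/images/kube-scheduler_v1.34.1",
		"/var/lib/minikube/images/etcd_3.6.4-0",
	}
	if err := loadImages(imgs); err != nil {
		fmt.Println(err)
	}
}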
	I1119 02:42:41.454319  291163 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:42:41.462416  291163 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 02:42:41.462446  291163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:42:41.496324  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:42:42.356676  291163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:42:42.356833  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-987573 minikube.k8s.io/updated_at=2025_11_19T02_42_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=old-k8s-version-987573 minikube.k8s.io/primary=true
	I1119 02:42:42.356833  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:42.367139  291163 ops.go:34] apiserver oom_adj: -16
	I1119 02:42:42.457034  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:42.957751  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:43.457688  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:43.957153  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:44.457654  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:44.957568  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:45.457760  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
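The repeated `kubectl get sa default` runs above are a readiness poll: kubeadm bootstrap is treated as settled once the default service account exists, and the check is retried on a 500ms cadence, as the timestamps show. A sketch of such a poll loop (binary path and kubeconfig are the ones from the log; the timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` every 500ms until it
// succeeds or the timeout elapses, matching the half-second cadence of
// the repeated log lines above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account now exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}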
	I1119 02:42:42.395695  306860 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:42:42.395917  306860 start.go:159] libmachine.API.Create for "default-k8s-diff-port-167150" (driver="docker")
	I1119 02:42:42.395950  306860 client.go:173] LocalClient.Create starting
	I1119 02:42:42.396027  306860 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 02:42:42.396063  306860 main.go:143] libmachine: Decoding PEM data...
	I1119 02:42:42.396092  306860 main.go:143] libmachine: Parsing certificate...
	I1119 02:42:42.396166  306860 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 02:42:42.396197  306860 main.go:143] libmachine: Decoding PEM data...
	I1119 02:42:42.396215  306860 main.go:143] libmachine: Parsing certificate...
	I1119 02:42:42.396556  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:42:42.414929  306860 cli_runner.go:211] docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:42:42.415012  306860 network_create.go:284] running [docker network inspect default-k8s-diff-port-167150] to gather additional debugging logs...
	I1119 02:42:42.415033  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150
	W1119 02:42:42.434734  306860 cli_runner.go:211] docker network inspect default-k8s-diff-port-167150 returned with exit code 1
	I1119 02:42:42.434765  306860 network_create.go:287] error running [docker network inspect default-k8s-diff-port-167150]: docker network inspect default-k8s-diff-port-167150: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-167150 not found
	I1119 02:42:42.434797  306860 network_create.go:289] output of [docker network inspect default-k8s-diff-port-167150]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-167150 not found
	
	** /stderr **
	I1119 02:42:42.434886  306860 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:42.454554  306860 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
	I1119 02:42:42.455185  306860 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-70e7d73f86d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:64:3f:46:8e:7a} reservation:<nil>}
	I1119 02:42:42.455956  306860 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ef477b5a23 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:eb:22:b3:62:92} reservation:<nil>}
	I1119 02:42:42.456451  306860 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4d7fb52c0aef IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:ad:9c:9a:f3:90} reservation:<nil>}
	I1119 02:42:42.457310  306860 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3129c4b60559 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:04:d6:88:46:9c} reservation:<nil>}
	I1119 02:42:42.458231  306860 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f14070}
	I1119 02:42:42.458263  306860 network_create.go:124] attempt to create docker network default-k8s-diff-port-167150 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1119 02:42:42.458321  306860 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 default-k8s-diff-port-167150
	I1119 02:42:42.508901  306860 network_create.go:108] docker network default-k8s-diff-port-167150 192.168.94.0/24 created
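The subnet probe above walks a fixed ladder of /24 candidates, with the third octet stepping 49, 58, 67, 76, 85, 94, and takes the first one no existing bridge occupies. A sketch of that selection; the step of 9 is inferred from the log output, not taken from minikube's source:

package main

import "fmt"

// pickSubnet returns the first candidate /24 not already taken,
// printing the same skip messages the network.go lines above show.
func pickSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet < 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if cidr, ok := pickSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.94.0/24, as in the log
	}
}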
	I1119 02:42:42.508935  306860 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-167150" container
	I1119 02:42:42.509018  306860 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:42:42.530727  306860 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-167150 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:42:42.549909  306860 oci.go:103] Successfully created a docker volume default-k8s-diff-port-167150
	I1119 02:42:42.549999  306860 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-167150-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --entrypoint /usr/bin/test -v default-k8s-diff-port-167150:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:42:43.411678  306860 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-167150
	I1119 02:42:43.411748  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:43.411762  306860 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:42:43.411813  306860 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-167150:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
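The docker run above is how minikube seeds a new machine: a throwaway container mounts the preload tarball read-only alongside the machine's named volume and untars straight into it, so the node container starts with its images already extracted. The same invocation rebuilt as a small Go helper, with the argument values taken from the log line:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// extractPreload reproduces the docker invocation from the log: a
// one-shot container whose entrypoint is tar, fed the lz4 preload
// tarball and the machine's volume as /extractDir.
func extractPreload(tarball, volume, baseImage string) error {
	args := []string{
		"run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball + ":/preloaded.tar:ro",
		"-v", volume + ":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	}
	fmt.Println("Run: docker", strings.Join(args, " "))
	return exec.Command("docker", args...).Run()
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
		"default-k8s-diff-port-167150",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924",
	)
	fmt.Println(err)
}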
	I1119 02:42:44.129173  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:42:44.149365  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:42:44.166610  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:42:44.183427  302848 provision.go:87] duration metric: took 385.168944ms to configureAuth
	I1119 02:42:44.183464  302848 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:42:44.183643  302848 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:44.183766  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.202233  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:44.202417  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:44.202444  302848 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:42:44.503275  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:42:44.503306  302848 machine.go:97] duration metric: took 4.193483812s to provisionDockerMachine
	I1119 02:42:44.503317  302848 client.go:176] duration metric: took 10.179262279s to LocalClient.Create
	I1119 02:42:44.503337  302848 start.go:167] duration metric: took 10.179334886s to libmachine.API.Create "embed-certs-811173"
	I1119 02:42:44.503346  302848 start.go:293] postStartSetup for "embed-certs-811173" (driver="docker")
	I1119 02:42:44.503358  302848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:42:44.503415  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:42:44.503480  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.526986  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.639041  302848 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:42:44.644425  302848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:42:44.644489  302848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:42:44.644502  302848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:42:44.644562  302848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:42:44.644662  302848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:42:44.644802  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:42:44.657698  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:44.684627  302848 start.go:296] duration metric: took 181.267139ms for postStartSetup
	I1119 02:42:44.685672  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:44.709637  302848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/config.json ...
	I1119 02:42:44.709970  302848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:42:44.710086  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.735883  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.842589  302848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:42:44.848370  302848 start.go:128] duration metric: took 10.52622031s to createHost
	I1119 02:42:44.848397  302848 start.go:83] releasing machines lock for "embed-certs-811173", held for 10.526348738s
	I1119 02:42:44.848480  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:44.873209  302848 ssh_runner.go:195] Run: cat /version.json
	I1119 02:42:44.873265  302848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:42:44.873267  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.873325  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.895290  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.896255  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:45.089046  302848 ssh_runner.go:195] Run: systemctl --version
	I1119 02:42:45.096166  302848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:42:45.135030  302848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:42:45.140127  302848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:42:45.140199  302848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:42:45.170487  302848 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:42:45.170513  302848 start.go:496] detecting cgroup driver to use...
	I1119 02:42:45.170545  302848 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:42:45.170595  302848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:42:45.188031  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:42:45.201633  302848 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:42:45.201682  302848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:42:45.219175  302848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:42:45.238631  302848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:42:45.357829  302848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:42:45.467480  302848 docker.go:234] disabling docker service ...
	I1119 02:42:45.467546  302848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:42:45.493546  302848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:42:45.508908  302848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:42:45.630796  302848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:42:45.744606  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:42:45.758583  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:42:45.802834  302848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:42:45.802888  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.815732  302848 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:42:45.815833  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.825707  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.847178  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.877522  302848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:42:45.886218  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.939829  302848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:46.000872  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
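The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to systemd, re-create conmon_cgroup = "pod", and ensure default_sysctls opens unprivileged ports. The first two substitutions expressed in Go, applied to an in-memory config (a sketch of the same regex logic, with assumed before-values):

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides performs, in memory, the substitutions the sed
// commands above make on 02-crio.conf: pin the pause image and force
// the systemd cgroup manager.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	// prints:
	//   pause_image = "registry.k8s.io/pause:3.10.1"
	//   cgroup_manager = "systemd"
	fmt.Print(applyCrioOverrides(in))
}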
	I1119 02:42:46.058642  302848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:42:46.066800  302848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:42:46.074598  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:46.154622  302848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:42:49.212232  302848 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.057563682s)
	I1119 02:42:49.212266  302848 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:42:49.212309  302848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:42:49.217067  302848 start.go:564] Will wait 60s for crictl version
	I1119 02:42:49.217124  302848 ssh_runner.go:195] Run: which crictl
	I1119 02:42:49.221132  302848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:42:49.251469  302848 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:42:49.251561  302848 ssh_runner.go:195] Run: crio --version
	I1119 02:42:49.280463  302848 ssh_runner.go:195] Run: crio --version
	I1119 02:42:49.310498  302848 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:42:48.297963  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.118818905s)
	I1119 02:42:48.297993  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 02:42:48.298019  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 02:42:48.298066  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 02:42:49.881405  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.583300882s)
	I1119 02:42:49.881450  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 02:42:49.881479  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 02:42:49.881558  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 02:42:45.957339  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:46.457346  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:46.957840  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:47.457460  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:47.957489  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:48.457490  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:48.957548  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.457120  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.957332  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:50.457258  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.311873  302848 cli_runner.go:164] Run: docker network inspect embed-certs-811173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:49.337627  302848 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:42:49.343117  302848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:49.363673  302848 kubeadm.go:884] updating cluster {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:42:49.363803  302848 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:49.363881  302848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:49.402301  302848 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:49.402327  302848 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:42:49.402381  302848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:49.432172  302848 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:49.432198  302848 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:42:49.432208  302848 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 02:42:49.432312  302848 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-811173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:42:49.432394  302848 ssh_runner.go:195] Run: crio config
	I1119 02:42:49.490697  302848 cni.go:84] Creating CNI manager for ""
	I1119 02:42:49.490766  302848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:49.490806  302848 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:49.490847  302848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-811173 NodeName:embed-certs-811173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:49.491024  302848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-811173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
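The kubeadm.yaml generated above bundles four YAML documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---". A quick way to sanity-check such a manifest is to split on the separators and list the kinds; a sketch without external YAML dependencies:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// listKinds splits a multi-document YAML stream on "---" separators
// and pulls out each document's kind, as one might sanity-check the
// generated /var/tmp/minikube/kubeadm.yaml.new above.
func listKinds(manifest string) []string {
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	var kinds []string
	for _, doc := range strings.Split(manifest, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			kinds = append(kinds, m[1])
		}
	}
	return kinds
}

func main() {
	manifest := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n" +
		"---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n" +
		"---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n" +
		"---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	// prints: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(listKinds(manifest))
}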
	I1119 02:42:49.491099  302848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:49.501687  302848 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:42:49.501746  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:49.512773  302848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:42:49.533263  302848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:49.552949  302848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 02:42:49.567525  302848 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:49.572161  302848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:49.583669  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:49.696403  302848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:49.727028  302848 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173 for IP: 192.168.85.2
	I1119 02:42:49.727140  302848 certs.go:195] generating shared ca certs ...
	I1119 02:42:49.727168  302848 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:49.727476  302848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:49.727544  302848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:49.727557  302848 certs.go:257] generating profile certs ...
	I1119 02:42:49.727625  302848 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key
	I1119 02:42:49.727650  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt with IP's: []
	I1119 02:42:50.145686  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt ...
	I1119 02:42:50.145726  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt: {Name:mke65652a37d1645724814d58214d8122c0736b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.145910  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key ...
	I1119 02:42:50.145933  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key: {Name:mk4ef5d0666a41b73aa30b3e0755e11f9f8fb3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.146056  302848 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4
	I1119 02:42:50.146079  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 02:42:50.407271  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 ...
	I1119 02:42:50.407295  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4: {Name:mk5f035a33d372bd059255b16679fd50e2c33fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.407442  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4 ...
	I1119 02:42:50.407456  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4: {Name:mka92b1af7e6c09f8bfc52286518647800bcb5a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.407529  302848 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt
	I1119 02:42:50.407602  302848 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key
	I1119 02:42:50.407658  302848 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key
	I1119 02:42:50.407673  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt with IP's: []
	I1119 02:42:51.018427  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt ...
	I1119 02:42:51.018475  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt: {Name:mkaf83dc022cbae8f555c0ae724724cf38e2e4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:51.018641  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key ...
	I1119 02:42:51.018703  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key: {Name:mk810704305f00f9b6af79898dc7dd3a9f2fe056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
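The profile certs above are x509 certificates whose IP SANs cover the service VIP (10.96.0.1), loopback, and the node IP (192.168.85.2). A standalone sketch of generating a cert with those SANs using Go's crypto/x509; self-signed here for brevity, whereas minikube signs with its minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSignedWithSANs builds a certificate whose IP SANs match the
// "Generating cert ... with IP's: [...]" line above. CommonName and
// serial are illustrative.
func selfSignedWithSANs(ips []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
	}
	for _, ip := range ips {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := selfSignedWithSANs([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.85.2"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d bytes of PEM\n", len(pemBytes))
}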
	I1119 02:42:51.018949  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:51.019001  302848 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:51.019016  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:51.019050  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:51.019085  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:51.019116  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:51.019168  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:51.019875  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:51.045884  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:51.068119  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:51.085405  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:51.102412  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:42:51.119942  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:42:51.141845  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:51.163668  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:42:51.185276  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:51.206376  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:51.223822  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:51.240933  302848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:51.254070  302848 ssh_runner.go:195] Run: openssl version
	I1119 02:42:51.260133  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:51.268759  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.272373  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.272418  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.314661  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:42:51.325625  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:51.335401  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.339792  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.339844  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.374219  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:51.382719  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:51.391325  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.395186  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.395235  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.433387  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
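
	The openssl/ln pairs above implement OpenSSL's c_rehash convention: each CA PEM under /usr/share/ca-certificates is symlinked as `<subject-hash>.0` in /etc/ssl/certs so that OpenSSL-based clients can locate it by hash. A sketch of the same step, shelling out to openssl for the hash since Go's standard library has no subject-hash helper (paths illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // rehash links certPath to /etc/ssl/certs/<hash>.0 so OpenSSL's
    // default verification paths can find the CA.
    func rehash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // mimic ln -fs: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
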
	I1119 02:42:51.441878  302848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:51.446149  302848 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:42:51.446206  302848 kubeadm.go:401] StartCluster: {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:51.446288  302848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:51.446341  302848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:51.474545  302848 cri.go:89] found id: ""
	I1119 02:42:51.474598  302848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:51.483078  302848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:51.491910  302848 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:51.491960  302848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:51.500593  302848 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:51.500610  302848 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:51.500655  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:42:51.508497  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:51.508546  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:51.516422  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:42:51.525757  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:51.525807  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:51.536275  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:42:51.545935  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:51.545987  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:51.554976  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:42:51.563559  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:51.563604  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
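
	The grep/rm sequence above is a keep-or-regenerate rule: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; anything else is removed so that kubeadm init can rewrite it. The same rule, sketched compactly (file list and endpoint copied from the log):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            // Missing file or wrong endpoint: delete so kubeadm rewrites it.
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(conf)
            }
        }
    }
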
	I1119 02:42:51.570652  302848 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:51.615030  302848 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:51.615151  302848 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:51.639511  302848 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:51.639676  302848 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:51.639872  302848 kubeadm.go:319] OS: Linux
	I1119 02:42:51.639979  302848 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:51.640073  302848 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:51.640147  302848 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:51.640208  302848 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:51.640267  302848 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:51.640326  302848 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:51.640387  302848 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:51.640451  302848 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:51.708966  302848 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:51.709135  302848 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:51.709283  302848 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:51.716801  302848 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
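
	Because the kic container cannot satisfy host-level preflight checks (swap, reserved ports, kernel config), the init command above carries a long --ignore-preflight-errors list. A sketch of assembling such a command from a flag slice (the list is abbreviated; running it remotely over SSH is out of scope here):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "Port-10250", "Swap", "NumCPU", "Mem",
            "SystemVerification",
            "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
        }
        cmd := exec.Command("kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors="+strings.Join(ignored, ","))
        fmt.Println(cmd.Args) // in practice this runs on the node, not locally
    }
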
	I1119 02:42:49.083522  306860 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-167150:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.671639706s)
	I1119 02:42:49.083553  306860 kic.go:203] duration metric: took 5.671789118s to extract preloaded images to volume ...
	W1119 02:42:49.083624  306860 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:42:49.083651  306860 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:42:49.083684  306860 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:42:49.149882  306860 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-167150 --name default-k8s-diff-port-167150 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --network default-k8s-diff-port-167150 --ip 192.168.94.2 --volume default-k8s-diff-port-167150:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:42:49.500594  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Running}}
	I1119 02:42:49.523895  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:49.547442  306860 cli_runner.go:164] Run: docker exec default-k8s-diff-port-167150 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:42:49.600101  306860 oci.go:144] the created container "default-k8s-diff-port-167150" has a running status.
	I1119 02:42:49.600142  306860 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa...
	I1119 02:42:50.269489  306860 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:42:50.295459  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:50.315528  306860 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:42:50.315562  306860 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-167150 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:42:50.356860  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:50.374600  306860 machine.go:94] provisionDockerMachine start ...
	I1119 02:42:50.374689  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.391114  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.391363  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.391382  306860 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:42:50.523354  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:42:50.523388  306860 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-167150"
	I1119 02:42:50.523491  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.548578  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.549009  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.549031  306860 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-167150 && echo "default-k8s-diff-port-167150" | sudo tee /etc/hostname
	I1119 02:42:50.708967  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:42:50.709056  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.729860  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.730154  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.730186  306860 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-167150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-167150/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-167150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:42:50.877302  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:42:50.877332  306860 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:42:50.877354  306860 ubuntu.go:190] setting up certificates
	I1119 02:42:50.877366  306860 provision.go:84] configureAuth start
	I1119 02:42:50.877421  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:50.899681  306860 provision.go:143] copyHostCerts
	I1119 02:42:50.899742  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:42:50.899755  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:42:50.899823  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:42:50.899935  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:42:50.899952  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:42:50.899994  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:42:50.900091  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:42:50.900100  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:42:50.900133  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
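
	copyHostCerts is deliberately idempotent: each of cert.pem, key.pem, and ca.pem is removed from the machine store before the fresh copy, so a re-run never leaves a stale or partially overwritten file. A sketch of that remove-then-copy step (paths illustrative):

    package main

    import "os"

    // replaceFile mimics the found/rm/cp sequence in the log:
    // delete any stale copy, then write the new one with tight perms.
    func replaceFile(src, dst string) error {
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        os.Remove(dst) // ignore error: file may not exist yet
        return os.WriteFile(dst, data, 0o600)
    }

    func main() {
        _ = replaceFile(
            "/home/jenkins/.minikube/certs/cert.pem", // illustrative paths
            "/home/jenkins/.minikube/cert.pem",
        )
    }
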
	I1119 02:42:50.900206  306860 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-167150 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-167150 localhost minikube]
	I1119 02:42:51.790042  306860 provision.go:177] copyRemoteCerts
	I1119 02:42:51.790120  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:42:51.790163  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:51.812679  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:51.914566  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:42:51.933520  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 02:42:51.951210  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:42:51.972791  306860 provision.go:87] duration metric: took 1.095412973s to configureAuth
	I1119 02:42:51.972820  306860 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:42:51.973010  306860 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:51.973126  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:51.993887  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:51.994333  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:51.994382  306860 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:42:51.720233  302848 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:51.720329  302848 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:51.720424  302848 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:52.110567  302848 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:52.469402  302848 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:52.783731  302848 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:53.170607  302848 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:53.607637  302848 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:53.607789  302848 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-811173 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:42:52.305265  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:42:52.305290  306860 machine.go:97] duration metric: took 1.930670923s to provisionDockerMachine
	I1119 02:42:52.305303  306860 client.go:176] duration metric: took 9.909346044s to LocalClient.Create
	I1119 02:42:52.305321  306860 start.go:167] duration metric: took 9.909403032s to libmachine.API.Create "default-k8s-diff-port-167150"
	I1119 02:42:52.305331  306860 start.go:293] postStartSetup for "default-k8s-diff-port-167150" (driver="docker")
	I1119 02:42:52.305347  306860 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:42:52.305414  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:42:52.305477  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.326893  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.427784  306860 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:42:52.432280  306860 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:42:52.432314  306860 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:42:52.432326  306860 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:42:52.432378  306860 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:42:52.432493  306860 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:42:52.432606  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:42:52.440486  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:52.461537  306860 start.go:296] duration metric: took 156.190397ms for postStartSetup
	I1119 02:42:52.461851  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:52.483860  306860 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json ...
	I1119 02:42:52.484137  306860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:42:52.484184  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.504090  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.602388  306860 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:42:52.607059  306860 start.go:128] duration metric: took 10.212819294s to createHost
	I1119 02:42:52.607086  306860 start.go:83] releasing machines lock for "default-k8s-diff-port-167150", held for 10.212970587s
	I1119 02:42:52.607148  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:52.626059  306860 ssh_runner.go:195] Run: cat /version.json
	I1119 02:42:52.626109  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.626132  306860 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:42:52.626195  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.646677  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.647867  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.822035  306860 ssh_runner.go:195] Run: systemctl --version
	I1119 02:42:52.831419  306860 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:42:52.869148  306860 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:42:52.873990  306860 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:42:52.874068  306860 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:42:52.901044  306860 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:42:52.901066  306860 start.go:496] detecting cgroup driver to use...
	I1119 02:42:52.901097  306860 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:42:52.901141  306860 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:42:52.917792  306860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:42:52.932809  306860 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:42:52.932864  306860 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:42:52.953113  306860 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:42:52.974059  306860 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:42:53.085982  306860 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:42:53.191486  306860 docker.go:234] disabling docker service ...
	I1119 02:42:53.191545  306860 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:42:53.209965  306860 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:42:53.222536  306860 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:42:53.334426  306860 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:42:53.452134  306860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:42:53.470021  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:42:53.491692  306860 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:42:53.491759  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.507808  306860 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:42:53.507878  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.521160  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.533686  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.545419  306860 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:42:53.559221  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.572537  306860 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.591930  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.604233  306860 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:42:53.612761  306860 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:42:53.620567  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:53.702418  306860 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:42:54.895903  306860 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.19344223s)
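
	The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image and switching the cgroup manager to systemd before crio is restarted (the restart itself took about 1.2s here). The same two rewrites sketched in Go, as a stand-in for the sed pipeline rather than minikube's actual code:

    package main

    import (
        "os"
        "os/exec"
        "regexp"
    )

    // patchCrioConf forces the pause image and the systemd cgroup
    // manager in the drop-in, then restarts the runtime.
    func patchCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            return err
        }
        return exec.Command("sudo", "systemctl", "restart", "crio").Run()
    }

    func main() {
        _ = patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf")
    }
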
	I1119 02:42:54.895934  306860 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:42:54.895987  306860 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:42:54.899921  306860 start.go:564] Will wait 60s for crictl version
	I1119 02:42:54.899979  306860 ssh_runner.go:195] Run: which crictl
	I1119 02:42:54.903499  306860 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:42:54.927965  306860 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:42:54.928037  306860 ssh_runner.go:195] Run: crio --version
	I1119 02:42:54.960299  306860 ssh_runner.go:195] Run: crio --version
	I1119 02:42:55.000689  306860 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:42:51.242518  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.36093376s)
	I1119 02:42:51.242553  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 02:42:51.242587  299668 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:42:51.242638  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:42:51.884817  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 02:42:51.884865  299668 cache_images.go:125] Successfully loaded all cached images
	I1119 02:42:51.884872  299668 cache_images.go:94] duration metric: took 16.678403063s to LoadCachedImages
	I1119 02:42:51.884886  299668 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 02:42:51.884977  299668 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-837474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:42:51.885077  299668 ssh_runner.go:195] Run: crio config
	I1119 02:42:51.934055  299668 cni.go:84] Creating CNI manager for ""
	I1119 02:42:51.934075  299668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:51.934089  299668 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:51.934107  299668 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-837474 NodeName:no-preload-837474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:51.934256  299668 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-837474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
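
	The generated kubeadm.yaml above stacks four API objects in one file, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Any consumer has to decode the documents in a loop; a sketch assuming gopkg.in/yaml.v3:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            // The decoder yields one document per "---" separator
            // and reports io.EOF when the stream is exhausted.
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(doc.Kind, doc.APIVersion)
        }
    }
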
	I1119 02:42:51.934344  299668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:51.942351  299668 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 02:42:51.942409  299668 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:51.950268  299668 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1119 02:42:51.950341  299668 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1119 02:42:51.950376  299668 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1119 02:42:51.950348  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 02:42:51.954459  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 02:42:51.954493  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1119 02:42:53.238137  299668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:42:53.257679  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 02:42:53.263721  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 02:42:53.263752  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1119 02:42:53.344069  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 02:42:53.351667  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 02:42:53.351703  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
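
	With no preloaded tarball for this profile, kubectl, kubelet, and kubeadm are fetched from dl.k8s.io and checked against the sibling .sha256 files referenced in the URLs above before being pushed to the node. A sketch of that download-and-verify step (fetchVerified is an illustrative helper, not a minikube function):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // fetchVerified downloads url and checks it against url+".sha256".
    func fetchVerified(url string) ([]byte, error) {
        body, err := get(url)
        if err != nil {
            return nil, err
        }
        want, err := get(url + ".sha256")
        if err != nil {
            return nil, err
        }
        got := fmt.Sprintf("%x", sha256.Sum256(body))
        if got != strings.TrimSpace(string(want)) {
            return nil, fmt.Errorf("checksum mismatch for %s", url)
        }
        return body, nil
    }

    func get(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    func main() {
        bin, err := fetchVerified("https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl")
        if err != nil {
            panic(err)
        }
        fmt.Println(len(bin), "bytes verified")
    }
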
	I1119 02:42:53.612715  299668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:53.620479  299668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:42:53.633087  299668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:53.657867  299668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1119 02:42:53.670102  299668 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:53.673427  299668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
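
	The one-liner above keeps /etc/hosts idempotent: it filters out any previous control-plane.minikube.internal record, appends the current entry, and copies the result back with sudo. The same filter-and-append expressed directly in Go (IP taken from this profile's log):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const record = "192.168.103.2\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale record for the control-plane name.
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, record)
        _ = os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }
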
	I1119 02:42:53.683353  299668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:53.768236  299668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:53.789788  299668 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474 for IP: 192.168.103.2
	I1119 02:42:53.789809  299668 certs.go:195] generating shared ca certs ...
	I1119 02:42:53.789829  299668 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:53.789987  299668 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:53.790033  299668 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:53.790044  299668 certs.go:257] generating profile certs ...
	I1119 02:42:53.790109  299668 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key
	I1119 02:42:53.790124  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt with IP's: []
	I1119 02:42:54.153349  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt ...
	I1119 02:42:54.153376  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt: {Name:mk582fda973473014e16fbac704f7616a0f6aa62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:54.162415  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key ...
	I1119 02:42:54.162455  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key: {Name:mkf82ec201b7ec108f85e3c1cb709e2e0c644536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:54.162615  299668 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449
	I1119 02:42:54.162634  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1119 02:42:50.957718  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:51.457622  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:51.958197  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:52.457608  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:52.957737  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:53.457646  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:53.957900  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:54.457538  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:54.957631  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:55.052333  291163 kubeadm.go:1114] duration metric: took 12.695568902s to wait for elevateKubeSystemPrivileges
	I1119 02:42:55.052368  291163 kubeadm.go:403] duration metric: took 26.311686714s to StartCluster
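
	The burst of identical `kubectl get sa default` runs above is a fixed-interval poll: elevateKubeSystemPrivileges retries roughly every 500ms until the default service account exists, i.e. until the controller-manager has caught up, which took about 12.7s here. A generic sketch of that wait loop (the helper name is mine, and the log invokes the versioned kubectl under /var/lib/minikube/binaries):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitFor runs cmd every interval until it exits zero or timeout elapses.
    func waitFor(interval, timeout time.Duration, name string, args ...string) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command(name, args...).Run() == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("timed out waiting for %s", name)
    }

    func main() {
        _ = waitFor(500*time.Millisecond, 6*time.Minute,
            "kubectl", "get", "sa", "default", "--kubeconfig", "/var/lib/minikube/kubeconfig")
    }
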
	I1119 02:42:55.052395  291163 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.052484  291163 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:42:55.053537  291163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.053789  291163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:42:55.053803  291163 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:42:55.053872  291163 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:42:55.053963  291163 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-987573"
	I1119 02:42:55.053987  291163 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-987573"
	I1119 02:42:55.054018  291163 host.go:66] Checking if "old-k8s-version-987573" exists ...
	I1119 02:42:55.054054  291163 config.go:182] Loaded profile config "old-k8s-version-987573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:42:55.054262  291163 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-987573"
	I1119 02:42:55.054313  291163 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-987573"
	I1119 02:42:55.054691  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.054736  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.058586  291163 out.go:179] * Verifying Kubernetes components...
	I1119 02:42:55.060065  291163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:55.084656  291163 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-987573"
	I1119 02:42:55.084747  291163 host.go:66] Checking if "old-k8s-version-987573" exists ...
	I1119 02:42:55.085405  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.085634  291163 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:42:55.086927  291163 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:42:55.086947  291163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:42:55.086995  291163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-987573
	I1119 02:42:55.121554  291163 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:42:55.121580  291163 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:42:55.121762  291163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-987573
	I1119 02:42:55.128371  291163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/old-k8s-version-987573/id_rsa Username:docker}
	I1119 02:42:55.160205  291163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/old-k8s-version-987573/id_rsa Username:docker}
	I1119 02:42:55.181208  291163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:42:55.259110  291163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:55.264651  291163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:42:55.282490  291163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:42:55.568676  291163 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 02:42:55.569719  291163 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-987573" to be "Ready" ...
	I1119 02:42:55.795625  291163 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:42:55.796948  291163 addons.go:515] duration metric: took 743.057906ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:42:54.248395  302848 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:54.248580  302848 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-811173 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:42:54.313308  302848 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:42:54.706382  302848 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:42:54.983151  302848 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:42:54.983371  302848 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:42:55.301965  302848 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:42:55.490617  302848 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:42:55.599136  302848 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:42:55.872895  302848 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:42:56.305311  302848 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:42:56.308494  302848 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:42:56.312387  302848 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:42:55.174521  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 ...
	I1119 02:42:55.174557  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449: {Name:mk5097a5f345e6abc2d685019cd0e0e0dd64d577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.174776  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449 ...
	I1119 02:42:55.174793  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449: {Name:mkab8fc1530b6e08d3a7078856d1f9ebfde15951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.174905  299668 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt
	I1119 02:42:55.174995  299668 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key
	I1119 02:42:55.175062  299668 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key
	I1119 02:42:55.175088  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt with IP's: []
	I1119 02:42:55.677842  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt ...
	I1119 02:42:55.677879  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt: {Name:mkecc3d139808fcfd56c1c505daef9b4314f266d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.678058  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key ...
	I1119 02:42:55.678074  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key: {Name:mkfd946463670be5706400ebe2ff5e4540ed9b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
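
	The profile's apiserver certificate above is minted with IP SANs covering the service VIP (10.96.0.1), loopback, and the node IP, and is signed by the shared minikubeCA. A self-contained sketch of producing a SAN-bearing certificate with crypto/x509 (self-signed here for brevity, where minikube signs with its CA key):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            // SANs matching the log: service VIP, loopback, node IP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("192.168.103.2"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Template doubles as parent, so the result is self-signed.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
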
	I1119 02:42:55.678301  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:55.678346  299668 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:55.678360  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:55.678394  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:55.678425  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:55.678472  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:55.678534  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:55.679296  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:55.700801  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:55.720342  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:55.741236  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:55.764042  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:42:55.785834  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1119 02:42:55.807648  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:55.827008  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:42:55.845962  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:55.864695  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:55.881798  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:55.898727  299668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:55.910163  299668 ssh_runner.go:195] Run: openssl version
	I1119 02:42:55.915785  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:55.923580  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.926945  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.927022  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.969227  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:55.978464  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:55.988370  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:55.992980  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:55.993028  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.051633  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:42:56.065808  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:56.079199  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.084981  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.085033  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.140499  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
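The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) come from `openssl x509 -hash`: OpenSSL locates trust anchors in /etc/ssl/certs via subject-hash-named links. In Go the same trust decision is made with an x509.CertPool; a minimal sketch, reusing the minikubeCA path from the log and a hypothetical leaf file:

// verify_sketch.go - illustrative only: the Go-side equivalent of the
// trust that the symlink step above establishes for OpenSSL consumers.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	caPEM, _ := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	leafPEM, _ := os.ReadFile("apiserver.crt") // hypothetical path
	block, _ := pem.Decode(leafPEM)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	leaf, _ := x509.ParseCertificate(block.Bytes)
	_, err := leaf.Verify(x509.VerifyOptions{Roots: pool})
	fmt.Println("verify err:", err)
}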
	I1119 02:42:56.151987  299668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:56.156998  299668 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
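The status-1 `stat` above is the first-start probe: a missing apiserver-kubelet-client.crt is taken to mean kubeadm has never initialized this node. A local sketch of the same check (minikube actually runs `stat` on the node over SSH and inspects the exit status):

// firststart_sketch.go - hypothetical local equivalent of the remote
// stat probe: a missing cert is read as "likely first start".
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("cert doesn't exist, likely first start")
	}
}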
	I1119 02:42:56.157063  299668 kubeadm.go:401] StartCluster: {Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:56.157164  299668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:56.157224  299668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:56.191409  299668 cri.go:89] found id: ""
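StartCluster first asks crictl for any existing kube-system containers, filtered by the pod-namespace label; the empty `found id: ""` result above confirms a clean node. A local equivalent of that probe (minikube wraps it in `sudo -s eval` over SSH):

// crictl_sketch.go - hypothetical local equivalent of the remote
// crictl listing above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
}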
	I1119 02:42:56.191487  299668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:56.203572  299668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:56.214503  299668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:56.214560  299668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:56.224485  299668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:56.224520  299668 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:56.224563  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:42:56.234337  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:56.234389  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:56.243718  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:42:56.254141  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:56.254192  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:56.263696  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:42:56.273116  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:56.273160  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:56.281275  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:42:56.290803  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:56.290848  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
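The grep/rm pairs above are the stale-config sweep: each kubeconfig under /etc/kubernetes must mention the expected control-plane endpoint, otherwise it is deleted before `kubeadm init` runs (here all four are simply absent). A simplified local sketch of the same loop (minikube issues these as sudo shell commands over SSH):

// cleanup_sketch.go - a simplified, local sketch of the grep-then-rm
// sweep logged above.
package main

import (
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // stale or missing: clear it before kubeadm init
		}
	}
}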
	I1119 02:42:56.300377  299668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:56.355983  299668 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:56.356057  299668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:56.389799  299668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:56.389890  299668 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:56.389940  299668 kubeadm.go:319] OS: Linux
	I1119 02:42:56.390011  299668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:56.390069  299668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:56.390131  299668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:56.390190  299668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:56.390253  299668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:56.390334  299668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:56.390396  299668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:56.390484  299668 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:56.476300  299668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:56.476471  299668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:56.476678  299668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:56.498223  299668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:42:55.001904  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:55.019551  306860 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:42:55.023819  306860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
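The bash one-liner above updates /etc/hosts idempotently: it filters out any existing host.minikube.internal line, appends the fresh mapping, and sudo-copies a temp file into place. The same logic in plain Go (illustrative; needs root, and minikube keeps the temp-file-plus-sudo-cp step):

// hosts_sketch.go - what the bash one-liner above does, as plain Go:
// drop any stale host.minikube.internal line, then append the new one.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.94.1\thost.minikube.internal"
	data, _ := os.ReadFile("/etc/hosts")
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}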
	I1119 02:42:55.035169  306860 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:42:55.035294  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:55.035349  306860 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:55.082998  306860 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:55.083033  306860 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:42:55.083093  306860 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:55.133091  306860 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:55.133117  306860 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:42:55.133127  306860 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1119 02:42:55.133229  306860 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-167150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:42:55.133303  306860 ssh_runner.go:195] Run: crio config
	I1119 02:42:55.202350  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:42:55.202422  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:55.202527  306860 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:55.202583  306860 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-167150 NodeName:default-k8s-diff-port-167150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:55.202750  306860 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-167150"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
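The generated kubeadm.yaml above is a single stream of four YAML documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), KubeletConfiguration, and KubeProxyConfiguration. A sketch that splits and identifies them, assuming the gopkg.in/yaml.v3 package is available (not something minikube itself requires):

// kubeadmcfg_sketch.go - illustrative: walk the multi-document
// kubeadm.yaml above and report each document's apiVersion/kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}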
	
	I1119 02:42:55.202816  306860 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:55.212677  306860 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:42:55.212740  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:55.222763  306860 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 02:42:55.238734  306860 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:55.263173  306860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1119 02:42:55.284386  306860 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:55.294186  306860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:55.309928  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:55.457096  306860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:55.486617  306860 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150 for IP: 192.168.94.2
	I1119 02:42:55.486643  306860 certs.go:195] generating shared ca certs ...
	I1119 02:42:55.486664  306860 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.486870  306860 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:55.486993  306860 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:55.487012  306860 certs.go:257] generating profile certs ...
	I1119 02:42:55.487088  306860 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key
	I1119 02:42:55.487102  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt with IP's: []
	I1119 02:42:56.094930  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt ...
	I1119 02:42:56.094965  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt: {Name:mk026804441dc7b69d5672d318a7041c3c66d037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.095134  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key ...
	I1119 02:42:56.095149  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key: {Name:mk48f5330ed931b78c15c78cffd61daf6c38116c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.095247  306860 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4
	I1119 02:42:56.095265  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1119 02:42:56.225092  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 ...
	I1119 02:42:56.225159  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4: {Name:mk96b6176b7d10d9bf2189cc1a892c03f023c6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.225342  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4 ...
	I1119 02:42:56.225363  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4: {Name:mk1968f40809874a1e5baaa63347f3037839ec18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.225677  306860 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt
	I1119 02:42:56.225860  306860 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key
	I1119 02:42:56.226000  306860 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key
	I1119 02:42:56.226018  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt with IP's: []
	I1119 02:42:56.364736  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt ...
	I1119 02:42:56.364766  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt: {Name:mk250838ee0813d8a1018cfdbc728e6a6682cbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.364947  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key ...
	I1119 02:42:56.364966  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key: {Name:mkf8d3d5c9e799a5f275d845a37b4700ad82ae66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.365187  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:56.365235  306860 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:56.365250  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:56.365288  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:56.365320  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:56.365352  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:56.365408  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:56.365996  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:56.390329  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:56.417649  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:56.439510  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:56.464545  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 02:42:56.495174  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:42:56.522898  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:56.545477  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:42:56.569966  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:56.596790  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:56.618988  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:56.641382  306860 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:56.659625  306860 ssh_runner.go:195] Run: openssl version
	I1119 02:42:56.667985  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:56.677102  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.680868  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.680921  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.728253  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:42:56.738101  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:56.748790  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.753545  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.753606  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.810205  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:42:56.821949  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:56.833110  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.838128  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.838183  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.891211  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:56.903114  306860 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:56.907959  306860 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:42:56.908012  306860 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:56.908102  306860 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:56.908149  306860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:56.940502  306860 cri.go:89] found id: ""
	I1119 02:42:56.940561  306860 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:56.950549  306860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:56.960914  306860 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:56.960969  306860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:56.971164  306860 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:56.971180  306860 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:56.971221  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 02:42:56.981206  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:56.981266  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:56.990677  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 02:42:57.001004  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:57.001054  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:57.011142  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 02:42:57.022773  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:57.022824  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:57.033930  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 02:42:57.043496  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:57.043549  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:42:57.052850  306860 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:57.102312  306860 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:57.102384  306860 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:57.124619  306860 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:57.124731  306860 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:57.124806  306860 kubeadm.go:319] OS: Linux
	I1119 02:42:57.124877  306860 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:57.124940  306860 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:57.125010  306860 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:57.125075  306860 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:57.125121  306860 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:57.125176  306860 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:57.125246  306860 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:57.125304  306860 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:57.195789  306860 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:57.195928  306860 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:57.196075  306860 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:57.203186  306860 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:42:56.313962  302848 out.go:252]   - Booting up control plane ...
	I1119 02:42:56.314089  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:42:56.314233  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:42:56.315640  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:42:56.334919  302848 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:42:56.335093  302848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:42:56.347888  302848 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:42:56.348202  302848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:42:56.348467  302848 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:42:56.489302  302848 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:42:56.489520  302848 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:42:57.490788  302848 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001707522s
	I1119 02:42:57.494204  302848 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:42:57.494338  302848 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 02:42:57.494504  302848 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:42:57.494636  302848 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
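The [control-plane-check] lines above poll fixed local endpoints: the kubelet at 127.0.0.1:10248/healthz, the controller-manager at :10257/healthz, the scheduler at :10259/livez, and the apiserver's /livez, each with a 4m0s budget. A hedged sketch of such a polling loop (not kubeadm's code; a real checker would verify TLS against the cluster CA rather than skipping it):

// healthpoll_sketch.go - illustrative polling loop in the spirit of
// kubeadm's [control-plane-check] phase logged above. Endpoints and
// timeouts mirror the log; the code itself is not kubeadm's.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Scheduler/controller-manager serve self-signed TLS locally;
		// trust verification is deliberately skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute))
}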
	I1119 02:42:56.501424  299668 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:56.501541  299668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:56.501670  299668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:56.649197  299668 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:57.131296  299668 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:57.360417  299668 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:57.537498  299668 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:57.630421  299668 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:57.630669  299668 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-837474] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 02:42:57.690142  299668 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:57.692964  299668 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-837474] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 02:42:58.271962  299668 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:42:58.474942  299668 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:42:58.759980  299668 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:42:58.760242  299668 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:42:59.509507  299668 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:42:56.077921  291163 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-987573" context rescaled to 1 replicas
	W1119 02:42:57.695200  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	W1119 02:43:00.073408  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	I1119 02:42:59.510574  302848 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.016320605s
	I1119 02:43:00.061250  302848 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.566984077s
	I1119 02:43:00.995299  302848 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501086445s
	I1119 02:43:01.005851  302848 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:01.015707  302848 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:01.023229  302848 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:01.023570  302848 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-811173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:01.031334  302848 kubeadm.go:319] [bootstrap-token] Using token: 7mjhrd.yzq9kll5v9huaptf
	I1119 02:43:00.399900  299668 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:43:01.316795  299668 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:43:01.487746  299668 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:43:01.585498  299668 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:43:01.586110  299668 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:43:01.590136  299668 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:42:57.204524  306860 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:57.204623  306860 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:57.204687  306860 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:57.340602  306860 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:57.763784  306860 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:58.132475  306860 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:58.496067  306860 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:59.065287  306860 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:59.065574  306860 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-167150 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:42:59.997463  306860 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:59.997634  306860 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-167150 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:43:00.551535  306860 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:43:00.590706  306860 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:43:00.670505  306860 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:43:00.670748  306860 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:43:00.836954  306860 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:43:00.975878  306860 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:43:01.234661  306860 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:43:01.776990  306860 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:43:01.935581  306860 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:43:01.936081  306860 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:43:01.939514  306860 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:43:01.942389  306860 out.go:252]   - Booting up control plane ...
	I1119 02:43:01.942532  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:43:01.942649  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:43:01.942759  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:43:01.957695  306860 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:43:01.957851  306860 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:43:01.964809  306860 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:43:01.966421  306860 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:43:01.966510  306860 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:43:02.081897  306860 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:43:02.082048  306860 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:43:01.032638  302848 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:01.032800  302848 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:01.035624  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:01.040457  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:43:01.043182  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:01.045472  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:01.048002  302848 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:43:01.401444  302848 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:01.820457  302848 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:02.401502  302848 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:02.402643  302848 kubeadm.go:319] 
	I1119 02:43:02.402737  302848 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:02.402774  302848 kubeadm.go:319] 
	I1119 02:43:02.402905  302848 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:02.402932  302848 kubeadm.go:319] 
	I1119 02:43:02.402964  302848 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:02.403044  302848 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:02.403131  302848 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:02.403146  302848 kubeadm.go:319] 
	I1119 02:43:02.403216  302848 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:02.403225  302848 kubeadm.go:319] 
	I1119 02:43:02.403289  302848 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:02.403297  302848 kubeadm.go:319] 
	I1119 02:43:02.403367  302848 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:02.403490  302848 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:02.403605  302848 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:02.403613  302848 kubeadm.go:319] 
	I1119 02:43:02.403712  302848 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:02.403838  302848 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:02.403855  302848 kubeadm.go:319] 
	I1119 02:43:02.403968  302848 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7mjhrd.yzq9kll5v9huaptf \
	I1119 02:43:02.404116  302848 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:02.404149  302848 kubeadm.go:319] 	--control-plane 
	I1119 02:43:02.404153  302848 kubeadm.go:319] 
	I1119 02:43:02.404265  302848 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:02.404277  302848 kubeadm.go:319] 
	I1119 02:43:02.404388  302848 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7mjhrd.yzq9kll5v9huaptf \
	I1119 02:43:02.404566  302848 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:02.407773  302848 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:02.407946  302848 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
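The two join commands printed above differ only in the --control-plane flag; both pin the cluster CA via --discovery-token-ca-cert-hash, which kubeadm computes as a SHA-256 digest of the CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch of recomputing that pin (the ca.crt path is an assumption about where minikube keeps the CA inside the node):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Assumed path to the cluster CA inside the minikube node.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }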
	I1119 02:43:02.407964  302848 cni.go:84] Creating CNI manager for ""
	I1119 02:43:02.407972  302848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:02.410242  302848 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:43:02.411389  302848 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:02.416029  302848 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:02.416045  302848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:02.434391  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:02.635779  302848 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:02.635869  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:02.635895  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-811173 minikube.k8s.io/updated_at=2025_11_19T02_43_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-811173 minikube.k8s.io/primary=true
	I1119 02:43:02.646141  302848 ops.go:34] apiserver oom_adj: -16
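The oom_adj probe above reads the apiserver's OOM score adjustment out of procfs; -16 means the kernel's OOM killer strongly prefers to kill other processes first. A sketch of the same read in Go (in the real flow the PID comes from pgrep, as in the bash command above):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readOOMAdj returns the oom_adj value for a PID, mirroring
    // `cat /proc/$(pgrep kube-apiserver)/oom_adj` in the log above.
    func readOOMAdj(pid int) (string, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        v, err := readOOMAdj(os.Getpid()) // own PID, for demonstration only
        if err != nil {
            panic(err)
        }
        fmt.Println("oom_adj:", v)
    }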
	I1119 02:43:02.701476  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:03.201546  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:03.701526  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
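The repeated `kubectl get sa default` calls above (roughly every 500ms, per the timestamps) are how the elevateKubeSystemPrivileges step waits: once the "default" ServiceAccount exists, the control plane is serving and the RBAC binding can take effect. A stdlib-only sketch of that cadence, with command and paths copied from the log (the overall timeout is an assumption):

    package main

    import (
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
                "get", "sa", "default", "--kubeconfig", "/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return // ServiceAccount exists; the wait is over
            }
            time.Sleep(500 * time.Millisecond)
        }
    }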
	I1119 02:43:01.593467  299668 out.go:252]   - Booting up control plane ...
	I1119 02:43:01.593615  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:43:01.593731  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:43:01.593821  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:43:01.609953  299668 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:43:01.610136  299668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:43:01.617306  299668 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:43:01.617705  299668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:43:01.617773  299668 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:43:01.745744  299668 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:43:01.745917  299668 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:43:02.749850  299668 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00218898s
	I1119 02:43:02.753994  299668 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:43:02.754137  299668 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 02:43:02.754320  299668 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:43:02.754458  299668 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:43:04.243363  299668 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.489187962s
	I1119 02:43:05.042174  299668 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.288115678s
	W1119 02:43:02.073659  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	W1119 02:43:04.075000  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	I1119 02:43:06.755785  299668 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00176291s
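Each control-plane-check above polls a component's local health endpoint until it answers: the apiserver's /livez on its advertise address (here https://192.168.103.2:8443), and the kube-controller-manager (10257) and kube-scheduler (10259) on loopback. Those endpoints serve HTTPS with self-signed certificates, so a probe has to skip verification; a hedged sketch:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Components use self-signed serving certs, so a local
            // health probe cannot verify them.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for _, url := range []string{
            "https://127.0.0.1:10257/healthz", // kube-controller-manager
            "https://127.0.0.1:10259/livez",   // kube-scheduler
        } {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println(url, "not ready:", err)
                continue
            }
            resp.Body.Close()
            fmt.Println(url, resp.Status)
        }
    }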
	I1119 02:43:06.768184  299668 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:06.778618  299668 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:06.786476  299668 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:06.786680  299668 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-837474 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:06.793671  299668 kubeadm.go:319] [bootstrap-token] Using token: 9fycjj.9ujoqc3x92l2ibft
	I1119 02:43:02.583638  306860 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.872402ms
	I1119 02:43:02.588260  306860 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:43:02.588375  306860 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1119 02:43:02.588528  306860 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:43:02.588631  306860 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:43:04.140696  306860 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.552291183s
	I1119 02:43:05.150994  306860 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.562670686s
	I1119 02:43:07.089548  306860 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501158948s
	I1119 02:43:07.101719  306860 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:07.110570  306860 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:07.118309  306860 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:07.118633  306860 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-167150 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:07.128002  306860 kubeadm.go:319] [bootstrap-token] Using token: waagng.bgqyeddkg8xbkifv
	I1119 02:43:07.129465  306860 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:07.129641  306860 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:07.132357  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:07.138676  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:43:07.142447  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:07.143596  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:07.145985  306860 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:43:04.202036  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:04.702118  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:05.201577  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:05.702195  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:06.202066  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:06.701602  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:07.202550  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:07.292809  302848 kubeadm.go:1114] duration metric: took 4.657004001s to wait for elevateKubeSystemPrivileges
	I1119 02:43:07.292851  302848 kubeadm.go:403] duration metric: took 15.846648283s to StartCluster
	I1119 02:43:07.292874  302848 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:07.292952  302848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:07.294786  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:07.295068  302848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:07.295192  302848 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:07.295259  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:07.295275  302848 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-811173"
	I1119 02:43:07.295295  302848 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-811173"
	I1119 02:43:07.295325  302848 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:07.295866  302848 addons.go:70] Setting default-storageclass=true in profile "embed-certs-811173"
	I1119 02:43:07.295887  302848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-811173"
	I1119 02:43:07.295930  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.296292  302848 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:07.296344  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.297705  302848 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:07.299117  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:07.331934  302848 addons.go:239] Setting addon default-storageclass=true in "embed-certs-811173"
	I1119 02:43:07.331974  302848 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:07.332295  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.332844  302848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:07.334167  302848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:07.334188  302848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:07.334241  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:07.362524  302848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:07.362762  302848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:07.362850  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:07.364663  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:07.388411  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:07.411165  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:07.483920  302848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:07.503288  302848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:07.513295  302848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:07.651779  302848 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
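The long sed pipeline above rewrites the coredns ConfigMap before replacing it: one edit splices a hosts block ahead of the forward directive so that host.minikube.internal resolves to the network gateway, and a second edit adds a log directive ahead of errors. The injected fragment, held in a Go raw string purely for illustration (the surrounding Corefile directives are omitted):

    // Fragment spliced ahead of the forward directive (sketch; IP is the
    // embed-certs-811173 gateway reported in the log line above).
    const hostsBlock = `        hosts {
               192.168.85.1 host.minikube.internal
               fallthrough
            }`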
	I1119 02:43:07.654104  302848 node_ready.go:35] waiting up to 6m0s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:07.881305  302848 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:43:06.795001  299668 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:06.795151  299668 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:06.797762  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:06.802768  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:43:06.805038  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:06.807078  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:06.809131  299668 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:43:07.162003  299668 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:07.591067  299668 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:08.161713  299668 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:08.162667  299668 kubeadm.go:319] 
	I1119 02:43:08.162773  299668 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:08.162792  299668 kubeadm.go:319] 
	I1119 02:43:08.162919  299668 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:08.162929  299668 kubeadm.go:319] 
	I1119 02:43:08.162968  299668 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:08.163054  299668 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:08.163127  299668 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:08.163135  299668 kubeadm.go:319] 
	I1119 02:43:08.163218  299668 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:08.163232  299668 kubeadm.go:319] 
	I1119 02:43:08.163270  299668 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:08.163276  299668 kubeadm.go:319] 
	I1119 02:43:08.163318  299668 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:08.163382  299668 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:08.163483  299668 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:08.163500  299668 kubeadm.go:319] 
	I1119 02:43:08.163615  299668 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:08.163733  299668 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:08.163746  299668 kubeadm.go:319] 
	I1119 02:43:08.163885  299668 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9fycjj.9ujoqc3x92l2ibft \
	I1119 02:43:08.164006  299668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:08.164041  299668 kubeadm.go:319] 	--control-plane 
	I1119 02:43:08.164050  299668 kubeadm.go:319] 
	I1119 02:43:08.164194  299668 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:08.164206  299668 kubeadm.go:319] 
	I1119 02:43:08.164311  299668 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9fycjj.9ujoqc3x92l2ibft \
	I1119 02:43:08.164401  299668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:08.166559  299668 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:08.166685  299668 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:43:08.166716  299668 cni.go:84] Creating CNI manager for ""
	I1119 02:43:08.166726  299668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:08.169105  299668 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:43:07.495981  306860 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:07.914284  306860 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:08.495511  306860 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:08.496414  306860 kubeadm.go:319] 
	I1119 02:43:08.496519  306860 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:08.496532  306860 kubeadm.go:319] 
	I1119 02:43:08.496630  306860 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:08.496640  306860 kubeadm.go:319] 
	I1119 02:43:08.496692  306860 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:08.496819  306860 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:08.496900  306860 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:08.496910  306860 kubeadm.go:319] 
	I1119 02:43:08.497001  306860 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:08.497011  306860 kubeadm.go:319] 
	I1119 02:43:08.497081  306860 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:08.497091  306860 kubeadm.go:319] 
	I1119 02:43:08.497172  306860 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:08.497303  306860 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:08.497404  306860 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:08.497414  306860 kubeadm.go:319] 
	I1119 02:43:08.497561  306860 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:08.497664  306860 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:08.497674  306860 kubeadm.go:319] 
	I1119 02:43:08.497789  306860 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token waagng.bgqyeddkg8xbkifv \
	I1119 02:43:08.497949  306860 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:08.497979  306860 kubeadm.go:319] 	--control-plane 
	I1119 02:43:08.497987  306860 kubeadm.go:319] 
	I1119 02:43:08.498113  306860 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:08.498121  306860 kubeadm.go:319] 
	I1119 02:43:08.498211  306860 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token waagng.bgqyeddkg8xbkifv \
	I1119 02:43:08.498313  306860 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:08.500938  306860 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:08.501038  306860 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:43:08.501062  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:43:08.501071  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:08.502415  306860 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:43:07.882405  302848 addons.go:515] duration metric: took 587.224612ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:08.155743  302848 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-811173" context rescaled to 1 replicas
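The kapi.go line above records minikube pinning the coredns Deployment to a single replica, since a one-node cluster gains nothing from the default two. Expressed through client-go's scale subresource (a sketch; minikube's own kapi package may do this differently):

    package kapisketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS pins kube-system/coredns to one replica via the
    // scale subresource, matching the "rescaled to 1 replicas" log line.
    func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }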
	I1119 02:43:08.170011  299668 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:08.174308  299668 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:08.174323  299668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:08.187641  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:08.394639  299668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:08.394749  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.394806  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-837474 minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-837474 minikube.k8s.io/primary=true
	I1119 02:43:08.404680  299668 ops.go:34] apiserver oom_adj: -16
	I1119 02:43:08.461588  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.962254  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.461759  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.961662  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 02:43:06.573722  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	I1119 02:43:08.072741  291163 node_ready.go:49] node "old-k8s-version-987573" is "Ready"
	I1119 02:43:08.072770  291163 node_ready.go:38] duration metric: took 12.502973194s for node "old-k8s-version-987573" to be "Ready" ...
	I1119 02:43:08.072782  291163 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:08.072824  291163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:08.085646  291163 api_server.go:72] duration metric: took 13.03179653s to wait for apiserver process to appear ...
	I1119 02:43:08.085675  291163 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:08.085696  291163 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:43:08.090892  291163 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:43:08.091918  291163 api_server.go:141] control plane version: v1.28.0
	I1119 02:43:08.091942  291163 api_server.go:131] duration metric: took 6.259879ms to wait for apiserver health ...
	I1119 02:43:08.091952  291163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:08.095373  291163 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:08.095414  291163 system_pods.go:61] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.095426  291163 system_pods.go:61] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.095449  291163 system_pods.go:61] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.095455  291163 system_pods.go:61] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.095461  291163 system_pods.go:61] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.095466  291163 system_pods.go:61] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.095471  291163 system_pods.go:61] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.095478  291163 system_pods.go:61] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.095487  291163 system_pods.go:74] duration metric: took 3.527954ms to wait for pod list to return data ...
	I1119 02:43:08.095497  291163 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:08.097407  291163 default_sa.go:45] found service account: "default"
	I1119 02:43:08.097424  291163 default_sa.go:55] duration metric: took 1.918195ms for default service account to be created ...
	I1119 02:43:08.097462  291163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:08.100635  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.100659  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.100665  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.100671  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.100675  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.100681  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.100686  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.100696  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.100704  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.100731  291163 retry.go:31] will retry after 255.615466ms: missing components: kube-dns
	I1119 02:43:08.360951  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.360990  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.360999  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.361007  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.361012  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.361017  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.361022  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.361027  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.361034  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.361058  291163 retry.go:31] will retry after 283.051609ms: missing components: kube-dns
	I1119 02:43:08.649105  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.649146  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.649155  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.649163  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.649177  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.649183  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.649189  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.649194  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.649201  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.649222  291163 retry.go:31] will retry after 437.362391ms: missing components: kube-dns
	I1119 02:43:09.091273  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:09.091310  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:09.091322  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:09.091328  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:09.091332  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:09.091336  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:09.091339  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:09.091342  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:09.091347  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:09.091360  291163 retry.go:31] will retry after 557.694848ms: missing components: kube-dns
	I1119 02:43:09.654831  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:09.654864  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Running
	I1119 02:43:09.654874  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:09.654880  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:09.654887  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:09.654892  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:09.654897  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:09.654902  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:09.654907  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Running
	I1119 02:43:09.654917  291163 system_pods.go:126] duration metric: took 1.55744718s to wait for k8s-apps to be running ...
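The retry.go lines above show the shape of the k8s-apps wait: list the kube-system pods, and if anything in the required set (here kube-dns) is still Pending, sleep a short, growing, jittered interval and list again (255ms, 283ms, 437ms, then 557ms in this run). A minimal sketch of that loop, with a hypothetical componentsRunning check standing in for the pod listing:

    package main

    import (
        "math/rand"
        "time"
    )

    // componentsRunning is a hypothetical stand-in for listing kube-system
    // pods and checking that kube-dns (CoreDNS) has reached Running.
    func componentsRunning() bool { return false }

    func main() {
        backoff := 250 * time.Millisecond // assumed base interval
        for attempt := 0; attempt < 10 && !componentsRunning(); attempt++ {
            jitter := time.Duration(rand.Int63n(int64(backoff / 2)))
            time.Sleep(backoff + jitter)
            backoff = backoff * 3 / 2 // grow roughly like the logged intervals
        }
    }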
	I1119 02:43:09.654931  291163 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:09.654989  291163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:09.668526  291163 system_svc.go:56] duration metric: took 13.587992ms WaitForService to wait for kubelet
	I1119 02:43:09.668557  291163 kubeadm.go:587] duration metric: took 14.614710886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:09.668577  291163 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:09.671058  291163 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:09.671080  291163 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:09.671094  291163 node_conditions.go:105] duration metric: took 2.511044ms to run NodePressure ...
	I1119 02:43:09.671108  291163 start.go:242] waiting for startup goroutines ...
	I1119 02:43:09.671122  291163 start.go:247] waiting for cluster config update ...
	I1119 02:43:09.671138  291163 start.go:256] writing updated cluster config ...
	I1119 02:43:09.671426  291163 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:09.675339  291163 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:09.679685  291163 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-djd8r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.683447  291163 pod_ready.go:94] pod "coredns-5dd5756b68-djd8r" is "Ready"
	I1119 02:43:09.683468  291163 pod_ready.go:86] duration metric: took 3.760218ms for pod "coredns-5dd5756b68-djd8r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.686154  291163 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.690031  291163 pod_ready.go:94] pod "etcd-old-k8s-version-987573" is "Ready"
	I1119 02:43:09.690049  291163 pod_ready.go:86] duration metric: took 3.878026ms for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.692504  291163 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.695894  291163 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-987573" is "Ready"
	I1119 02:43:09.695913  291163 pod_ready.go:86] duration metric: took 3.39096ms for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.700042  291163 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.080305  291163 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-987573" is "Ready"
	I1119 02:43:10.080330  291163 pod_ready.go:86] duration metric: took 380.2693ms for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.279834  291163 pod_ready.go:83] waiting for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.679358  291163 pod_ready.go:94] pod "kube-proxy-tmqhk" is "Ready"
	I1119 02:43:10.679390  291163 pod_ready.go:86] duration metric: took 399.530656ms for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.880413  291163 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:11.279416  291163 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-987573" is "Ready"
	I1119 02:43:11.279469  291163 pod_ready.go:86] duration metric: took 399.023354ms for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:11.279484  291163 pod_ready.go:40] duration metric: took 1.604115977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
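A pod counts as "Ready" in the waits above when its PodReady status condition is True; each per-pod check resolves in milliseconds here because the pods were already up. The condition scan, using client-go's core/v1 types (a sketch):

    package readysketch

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether a Pod's PodReady condition is True,
    // which is what the pod_ready.go waits above are testing.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }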
	I1119 02:43:11.320952  291163 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:43:11.322818  291163 out.go:203] 
	W1119 02:43:11.324015  291163 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:43:11.325253  291163 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:43:11.326753  291163 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-987573" cluster and "default" namespace by default
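The closing warning applies kubectl's version-skew policy: kubectl is only supported within one minor version of the apiserver, and 1.34.2 against 1.28.0 is six minors apart, hence the nudge toward `minikube kubectl`. The skew arithmetic, for reference:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the distance in minor versions between client
    // and server version strings like "1.34.2".
    func minorSkew(client, server string) int {
        cm, _ := strconv.Atoi(strings.Split(client, ".")[1])
        sm, _ := strconv.Atoi(strings.Split(server, ".")[1])
        if cm > sm {
            return cm - sm
        }
        return sm - cm
    }

    func main() {
        fmt.Println(minorSkew("1.34.2", "1.28.0")) // 6, matching "minor skew: 6"
    }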
	I1119 02:43:08.503687  306860 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:08.508285  306860 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:08.508302  306860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:08.523707  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:08.769348  306860 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:08.769426  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.769484  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-167150 minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=default-k8s-diff-port-167150 minikube.k8s.io/primary=true
	I1119 02:43:08.779644  306860 ops.go:34] apiserver oom_adj: -16
	I1119 02:43:08.864308  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.364395  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.865330  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.364616  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.864703  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.364553  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.864420  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.365307  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.440810  306860 kubeadm.go:1114] duration metric: took 3.671440647s to wait for elevateKubeSystemPrivileges
	I1119 02:43:12.440859  306860 kubeadm.go:403] duration metric: took 15.532850823s to StartCluster
	I1119 02:43:12.440882  306860 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:12.440962  306860 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:12.443128  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:12.443390  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:12.443402  306860 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:12.443617  306860 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:12.443467  306860 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:12.443670  306860 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-167150"
	I1119 02:43:12.443679  306860 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-167150"
	I1119 02:43:12.443697  306860 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-167150"
	I1119 02:43:12.443697  306860 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167150"
	I1119 02:43:12.443736  306860 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:43:12.444076  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.444253  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.446396  306860 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:12.447600  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:12.470366  306860 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:12.471033  306860 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-167150"
	I1119 02:43:12.471078  306860 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:43:12.471574  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.472766  306860 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:12.472818  306860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:12.472877  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:12.503314  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:12.503591  306860 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:12.503615  306860 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:12.503672  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:12.534100  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:12.556628  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:12.606106  306860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:12.623922  306860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:12.650781  306860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:12.727240  306860 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 02:43:12.728708  306860 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:43:12.921283  306860 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:43:10.461847  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.962221  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.462998  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.962639  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.462654  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.962592  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:13.462281  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:13.526012  299668 kubeadm.go:1114] duration metric: took 5.131316482s to wait for elevateKubeSystemPrivileges
	I1119 02:43:13.526050  299668 kubeadm.go:403] duration metric: took 17.368991046s to StartCluster
	I1119 02:43:13.526070  299668 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:13.526144  299668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:13.528869  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:13.529152  299668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:13.529178  299668 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:13.529221  299668 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:13.529318  299668 addons.go:70] Setting storage-provisioner=true in profile "no-preload-837474"
	I1119 02:43:13.529340  299668 addons.go:239] Setting addon storage-provisioner=true in "no-preload-837474"
	I1119 02:43:13.529340  299668 addons.go:70] Setting default-storageclass=true in profile "no-preload-837474"
	I1119 02:43:13.529365  299668 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:13.529370  299668 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:43:13.529375  299668 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-837474"
	I1119 02:43:13.529859  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.530016  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.530719  299668 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:13.531956  299668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:13.553148  299668 addons.go:239] Setting addon default-storageclass=true in "no-preload-837474"
	I1119 02:43:13.553192  299668 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:43:13.553734  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.555218  299668 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:13.556409  299668 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:13.556465  299668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:13.556515  299668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:13.581067  299668 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:13.581088  299668 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:13.581147  299668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:13.587309  299668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:43:13.603773  299668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:43:13.616042  299668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:13.662733  299668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:13.696898  299668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:13.712155  299668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:13.803707  299668 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 02:43:13.805528  299668 node_ready.go:35] waiting up to 6m0s for node "no-preload-837474" to be "Ready" ...
	I1119 02:43:14.021090  299668 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1119 02:43:09.657354  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	W1119 02:43:12.157245  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	I1119 02:43:14.022184  299668 addons.go:515] duration metric: took 492.963117ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:14.308619  299668 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-837474" context rescaled to 1 replicas
	I1119 02:43:12.922563  306860 addons.go:515] duration metric: took 479.097332ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:13.231221  306860 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-167150" context rescaled to 1 replicas
	W1119 02:43:14.732655  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	W1119 02:43:14.157530  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	W1119 02:43:16.157612  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	I1119 02:43:18.657467  302848 node_ready.go:49] node "embed-certs-811173" is "Ready"
	I1119 02:43:18.657570  302848 node_ready.go:38] duration metric: took 11.003423276s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:18.657596  302848 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:18.657639  302848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:18.670551  302848 api_server.go:72] duration metric: took 11.375418064s to wait for apiserver process to appear ...
	I1119 02:43:18.670593  302848 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:18.670611  302848 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:18.675195  302848 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:43:18.676254  302848 api_server.go:141] control plane version: v1.34.1
	I1119 02:43:18.676282  302848 api_server.go:131] duration metric: took 5.680617ms to wait for apiserver health ...
	I1119 02:43:18.676292  302848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:18.679796  302848 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:18.679829  302848 system_pods.go:61] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:18.679837  302848 system_pods.go:61] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.679843  302848 system_pods.go:61] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.679849  302848 system_pods.go:61] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.679860  302848 system_pods.go:61] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.679865  302848 system_pods.go:61] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.679873  302848 system_pods.go:61] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.679881  302848 system_pods.go:61] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:18.679892  302848 system_pods.go:74] duration metric: took 3.592078ms to wait for pod list to return data ...
	I1119 02:43:18.679903  302848 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:18.682287  302848 default_sa.go:45] found service account: "default"
	I1119 02:43:18.682313  302848 default_sa.go:55] duration metric: took 2.403388ms for default service account to be created ...
	I1119 02:43:18.682323  302848 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:18.684915  302848 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:18.684945  302848 system_pods.go:89] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:18.684954  302848 system_pods.go:89] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.684965  302848 system_pods.go:89] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.684971  302848 system_pods.go:89] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.684980  302848 system_pods.go:89] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.684986  302848 system_pods.go:89] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.684993  302848 system_pods.go:89] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.685000  302848 system_pods.go:89] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:18.685025  302848 retry.go:31] will retry after 210.702103ms: missing components: kube-dns
	I1119 02:43:18.900340  302848 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:18.900379  302848 system_pods.go:89] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Running
	I1119 02:43:18.900388  302848 system_pods.go:89] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.900394  302848 system_pods.go:89] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.900400  302848 system_pods.go:89] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.900410  302848 system_pods.go:89] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.900415  302848 system_pods.go:89] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.900424  302848 system_pods.go:89] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.900441  302848 system_pods.go:89] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Running
	I1119 02:43:18.900455  302848 system_pods.go:126] duration metric: took 218.125466ms to wait for k8s-apps to be running ...
	I1119 02:43:18.900467  302848 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:18.900516  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:18.914258  302848 system_svc.go:56] duration metric: took 13.781732ms WaitForService to wait for kubelet
	I1119 02:43:18.914285  302848 kubeadm.go:587] duration metric: took 11.619154777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:18.914308  302848 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:18.917624  302848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:18.917653  302848 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:18.917668  302848 node_conditions.go:105] duration metric: took 3.351447ms to run NodePressure ...
	I1119 02:43:18.917682  302848 start.go:242] waiting for startup goroutines ...
	I1119 02:43:18.917691  302848 start.go:247] waiting for cluster config update ...
	I1119 02:43:18.917704  302848 start.go:256] writing updated cluster config ...
	I1119 02:43:18.918010  302848 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:18.922579  302848 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:18.927046  302848 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.932042  302848 pod_ready.go:94] pod "coredns-66bc5c9577-6zqr2" is "Ready"
	I1119 02:43:18.932062  302848 pod_ready.go:86] duration metric: took 4.995305ms for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.934146  302848 pod_ready.go:83] waiting for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.938004  302848 pod_ready.go:94] pod "etcd-embed-certs-811173" is "Ready"
	I1119 02:43:18.938027  302848 pod_ready.go:86] duration metric: took 3.859982ms for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.939959  302848 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.943426  302848 pod_ready.go:94] pod "kube-apiserver-embed-certs-811173" is "Ready"
	I1119 02:43:18.943477  302848 pod_ready.go:86] duration metric: took 3.498122ms for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.945295  302848 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:43:15.809292  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	W1119 02:43:18.308758  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	I1119 02:43:19.327493  302848 pod_ready.go:94] pod "kube-controller-manager-embed-certs-811173" is "Ready"
	I1119 02:43:19.327522  302848 pod_ready.go:86] duration metric: took 382.207661ms for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:19.527541  302848 pod_ready.go:83] waiting for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:19.926760  302848 pod_ready.go:94] pod "kube-proxy-s5bzz" is "Ready"
	I1119 02:43:19.926788  302848 pod_ready.go:86] duration metric: took 399.218426ms for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:20.127073  302848 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:20.527220  302848 pod_ready.go:94] pod "kube-scheduler-embed-certs-811173" is "Ready"
	I1119 02:43:20.527245  302848 pod_ready.go:86] duration metric: took 400.150902ms for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:20.527257  302848 pod_ready.go:40] duration metric: took 1.604655373s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:20.574829  302848 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:43:20.576692  302848 out.go:179] * Done! kubectl is now configured to use "embed-certs-811173" cluster and "default" namespace by default
	W1119 02:43:17.232129  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	W1119 02:43:19.732303  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	W1119 02:43:21.732557  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	W1119 02:43:20.309649  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	W1119 02:43:22.808418  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	W1119 02:43:24.809115  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	W1119 02:43:24.231545  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	I1119 02:43:24.733458  306860 node_ready.go:49] node "default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:24.733492  306860 node_ready.go:38] duration metric: took 12.004757465s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:43:24.733508  306860 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:24.733583  306860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:24.752894  306860 api_server.go:72] duration metric: took 12.309451634s to wait for apiserver process to appear ...
	I1119 02:43:24.752923  306860 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:24.752947  306860 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1119 02:43:24.757341  306860 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1119 02:43:24.758286  306860 api_server.go:141] control plane version: v1.34.1
	I1119 02:43:24.758343  306860 api_server.go:131] duration metric: took 5.412493ms to wait for apiserver health ...
	I1119 02:43:24.758360  306860 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:24.764264  306860 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:24.764302  306860 system_pods.go:61] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:24.764312  306860 system_pods.go:61] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running
	I1119 02:43:24.764317  306860 system_pods.go:61] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:43:24.764321  306860 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running
	I1119 02:43:24.764324  306860 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running
	I1119 02:43:24.764328  306860 system_pods.go:61] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:43:24.764331  306860 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running
	I1119 02:43:24.764335  306860 system_pods.go:61] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:24.764341  306860 system_pods.go:74] duration metric: took 5.975017ms to wait for pod list to return data ...
	I1119 02:43:24.764348  306860 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:24.766502  306860 default_sa.go:45] found service account: "default"
	I1119 02:43:24.766524  306860 default_sa.go:55] duration metric: took 2.165771ms for default service account to be created ...
	I1119 02:43:24.766533  306860 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:24.865373  306860 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:24.865426  306860 system_pods.go:89] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:24.865447  306860 system_pods.go:89] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running
	I1119 02:43:24.865457  306860 system_pods.go:89] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:43:24.865479  306860 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running
	I1119 02:43:24.865489  306860 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running
	I1119 02:43:24.865495  306860 system_pods.go:89] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:43:24.865505  306860 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running
	I1119 02:43:24.865519  306860 system_pods.go:89] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:24.865542  306860 retry.go:31] will retry after 194.79473ms: missing components: kube-dns
	I1119 02:43:25.064190  306860 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:25.064221  306860 system_pods.go:89] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Running
	I1119 02:43:25.064227  306860 system_pods.go:89] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running
	I1119 02:43:25.064232  306860 system_pods.go:89] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:43:25.064235  306860 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running
	I1119 02:43:25.064239  306860 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running
	I1119 02:43:25.064242  306860 system_pods.go:89] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:43:25.064246  306860 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running
	I1119 02:43:25.064250  306860 system_pods.go:89] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Running
	I1119 02:43:25.064257  306860 system_pods.go:126] duration metric: took 297.719432ms to wait for k8s-apps to be running ...
	I1119 02:43:25.064266  306860 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:25.064303  306860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:25.076907  306860 system_svc.go:56] duration metric: took 12.630218ms WaitForService to wait for kubelet
	I1119 02:43:25.076935  306860 kubeadm.go:587] duration metric: took 12.633502759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:25.076960  306860 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:25.079481  306860 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:25.079502  306860 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:25.079515  306860 node_conditions.go:105] duration metric: took 2.549912ms to run NodePressure ...
	I1119 02:43:25.079525  306860 start.go:242] waiting for startup goroutines ...
	I1119 02:43:25.079531  306860 start.go:247] waiting for cluster config update ...
	I1119 02:43:25.079541  306860 start.go:256] writing updated cluster config ...
	I1119 02:43:25.079785  306860 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:25.083850  306860 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:25.087017  306860 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.090686  306860 pod_ready.go:94] pod "coredns-66bc5c9577-bht2q" is "Ready"
	I1119 02:43:25.090707  306860 pod_ready.go:86] duration metric: took 3.667578ms for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.092373  306860 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.095652  306860 pod_ready.go:94] pod "etcd-default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:25.095668  306860 pod_ready.go:86] duration metric: took 3.276898ms for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.097376  306860 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.100915  306860 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:25.100937  306860 pod_ready.go:86] duration metric: took 3.543197ms for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.102634  306860 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.487998  306860 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:25.488025  306860 pod_ready.go:86] duration metric: took 385.369921ms for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.687758  306860 pod_ready.go:83] waiting for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:26.088008  306860 pod_ready.go:94] pod "kube-proxy-8gl4n" is "Ready"
	I1119 02:43:26.088034  306860 pod_ready.go:86] duration metric: took 400.250445ms for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:26.288481  306860 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:26.688376  306860 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:26.688410  306860 pod_ready.go:86] duration metric: took 399.899992ms for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:26.688424  306860 pod_ready.go:40] duration metric: took 1.604546321s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:26.732044  306860 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:43:26.733844  306860 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-167150" cluster and "default" namespace by default
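
[Editor's note] The healthz wait recorded above (api_server.go:253, "Checking apiserver healthz at https://192.168.94.2:8444/healthz ... returned 200: ok") boils down to polling the apiserver's /healthz endpoint until it answers 200/"ok" or a deadline passes. A minimal Go sketch of that pattern, assuming an anonymous TLS client (the real code authenticates with the cluster CA; the URL is taken from the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz probes an apiserver /healthz endpoint until it answers
	// "ok" or the deadline passes, mirroring the wait logged by api_server.go.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Sketch only: skip cert verification; minikube itself trusts the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthy, as in `returned 200: ok` above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.94.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}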
	
	
	==> CRI-O <==
	Nov 19 02:43:18 embed-certs-811173 crio[770]: time="2025-11-19T02:43:18.541572777Z" level=info msg="Starting container: 0ff4435f74be156fc739ceda3f563ffeb87e0dba26fcae7fcb43addcc263bcdb" id=2f3702a3-0ee9-4d09-8a62-c43f7ef91ab6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:43:18 embed-certs-811173 crio[770]: time="2025-11-19T02:43:18.543444476Z" level=info msg="Started container" PID=1841 containerID=0ff4435f74be156fc739ceda3f563ffeb87e0dba26fcae7fcb43addcc263bcdb description=kube-system/coredns-66bc5c9577-6zqr2/coredns id=2f3702a3-0ee9-4d09-8a62-c43f7ef91ab6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c22103e45f43f3a6492430fb18a46495979f03969579736cf112d0a91786ae89
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.04710006Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cef2acc9-84ad-49ef-8044-1adadc461464 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.047160443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.052798908Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:35f64b86eace427bd5ae5e701593416df8175fdad0aec83db36f3c2dd65afcc3 UID:e73ec6be-f0d4-46e6-8113-18b6d64163b1 NetNS:/var/run/netns/bf466e4a-9834-4162-ae02-c496abd3b85d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a5f0}] Aliases:map[]}"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.052837526Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.062697539Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:35f64b86eace427bd5ae5e701593416df8175fdad0aec83db36f3c2dd65afcc3 UID:e73ec6be-f0d4-46e6-8113-18b6d64163b1 NetNS:/var/run/netns/bf466e4a-9834-4162-ae02-c496abd3b85d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a5f0}] Aliases:map[]}"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.062872657Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.063602994Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.064418938Z" level=info msg="Ran pod sandbox 35f64b86eace427bd5ae5e701593416df8175fdad0aec83db36f3c2dd65afcc3 with infra container: default/busybox/POD" id=cef2acc9-84ad-49ef-8044-1adadc461464 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.065448655Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd4b6630-232e-41b5-b89b-c32f4c266349 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.065585662Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dd4b6630-232e-41b5-b89b-c32f4c266349 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.06563166Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dd4b6630-232e-41b5-b89b-c32f4c266349 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.066318227Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=df8d68b6-e492-4796-9ffa-02347c53e718 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.067926527Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.886587209Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=df8d68b6-e492-4796-9ffa-02347c53e718 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.887239149Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1d758774-82ea-4fee-a614-0507f8d8fbbb name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.888606415Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7d6be326-8ee3-4e18-ae71-b85ac038b162 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.89169156Z" level=info msg="Creating container: default/busybox/busybox" id=ac85f6e0-f4d6-4af8-8c2a-196ad2b64ce5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.891790911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.895332824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.895724518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.924594863Z" level=info msg="Created container 186c04249719e432ce70d1ddcfb28c786f80130bc55bd2337cd9ef98b77edc34: default/busybox/busybox" id=ac85f6e0-f4d6-4af8-8c2a-196ad2b64ce5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.925115899Z" level=info msg="Starting container: 186c04249719e432ce70d1ddcfb28c786f80130bc55bd2337cd9ef98b77edc34" id=4977aa24-c1fb-4332-880f-9c17fdfed181 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:43:21 embed-certs-811173 crio[770]: time="2025-11-19T02:43:21.926762827Z" level=info msg="Started container" PID=1922 containerID=186c04249719e432ce70d1ddcfb28c786f80130bc55bd2337cd9ef98b77edc34 description=default/busybox/busybox id=4977aa24-c1fb-4332-880f-9c17fdfed181 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35f64b86eace427bd5ae5e701593416df8175fdad0aec83db36f3c2dd65afcc3
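
[Editor's note] The pull sequence above shows CRI-O resolving the busybox tag to a content digest ("Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c...") before creating the container. That resolution can be reproduced by hand with crictl against the same socket; a sketch, assuming crictl is installed and configured for CRI-O (e.g. /var/run/crio/crio.sock), flags hedged to current crictl:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Pull an image by tag through the CRI, then list it with digests so the
	// tag-to-digest resolution seen in the CRI-O log can be inspected.
	func main() {
		img := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
		if out, err := exec.Command("crictl", "pull", img).CombinedOutput(); err != nil {
			fmt.Printf("pull failed: %v\n%s", err, out)
			return
		}
		// --digests adds a DIGEST column, matching the sha256 in the log above.
		out, err := exec.Command("crictl", "images", "--digests", img).CombinedOutput()
		if err != nil {
			fmt.Printf("images failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}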
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	186c04249719e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   35f64b86eace4       busybox                                      default
	0ff4435f74be1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   c22103e45f43f       coredns-66bc5c9577-6zqr2                     kube-system
	6259046ab3176       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   0e601bfa3c593       storage-provisioner                          kube-system
	b21b8648aaebc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   64fa5d5b75d59       kindnet-b2w9g                                kube-system
	9e454c351647e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   f45c8074f300e       kube-proxy-s5bzz                             kube-system
	ecc42578853bb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      32 seconds ago      Running             etcd                      0                   1022edbddc301       etcd-embed-certs-811173                      kube-system
	1b7b2b74c2e72       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      32 seconds ago      Running             kube-scheduler            0                   971442aabc1ae       kube-scheduler-embed-certs-811173            kube-system
	f1c6e3421bd2e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      32 seconds ago      Running             kube-controller-manager   0                   e0ca5b476f53b       kube-controller-manager-embed-certs-811173   kube-system
	a9028199c53c0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      32 seconds ago      Running             kube-apiserver            0                   f0b81690ca1af       kube-apiserver-embed-certs-811173            kube-system
	
	
	==> coredns [0ff4435f74be156fc739ceda3f563ffeb87e0dba26fcae7fcb43addcc263bcdb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49891 - 24625 "HINFO IN 857999678845060164.164351920838290463. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.116917573s
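
[Editor's note] The configuration SHA above reflects the Corefile after minikube's host-record injection (the `kubectl ... replace -f -` over the coredns ConfigMap logged earlier), which adds a hosts{} block mapping host.minikube.internal to the host gateway (192.168.103.1 in the no-preload run; presumably 192.168.85.1 for this network). A sketch that checks the mapping by querying the kube-dns ClusterIP directly (10.96.0.10 per the apiserver log below); it only works from a pod or node with a route to the service network:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// Resolve host.minikube.internal against the cluster DNS service,
	// verifying the hosts{} block minikube injected into the coredns ConfigMap.
	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// kube-dns ClusterIP, taken from the "allocated clusterIPs" apiserver log line.
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "host.minikube.internal")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs) // expected: the injected gateway address
	}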
	
	
	==> describe nodes <==
	Name:               embed-certs-811173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-811173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=embed-certs-811173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:42:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-811173
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:43:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:43:18 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:43:18 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:43:18 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:43:18 +0000   Wed, 19 Nov 2025 02:43:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-811173
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                c32197b9-e1d7-4c8f-bcdd-84def1c02350
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-6zqr2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-811173                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-b2w9g                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-811173             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-811173    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-s5bzz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-811173             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node embed-certs-811173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node embed-certs-811173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x8 over 33s)  kubelet          Node embed-certs-811173 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node embed-certs-811173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node embed-certs-811173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node embed-certs-811173 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node embed-certs-811173 event: Registered Node embed-certs-811173 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-811173 status is now: NodeReady
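
[Editor's note] The node_ready waits earlier in the log amount to polling this Node object until its Ready condition reports True, the same transition recorded by the NodeReady event above. A compact client-go sketch of that check, assuming a kubeconfig at the default location (names are illustrative):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True,
	// the same signal node_ready.go waits on in the log above.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ready, err := nodeReady(cs, "embed-certs-811173")
		fmt.Println(ready, err)
	}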
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [ecc42578853bb9bceb84154f2919de37f8c1f230e6930579d38db8568f81574b] <==
	{"level":"warn","ts":"2025-11-19T02:42:58.746042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.752552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.766712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.775059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.781340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.796100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.803150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.809428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.816143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.828520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.853873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.860957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.867251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.874046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.880083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.886709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.893296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.902286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.908579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.914923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.921970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.934629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.941691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.948653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:42:58.996167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51196","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:43:30 up  1:25,  0 user,  load average: 4.24, 3.35, 2.20
	Linux embed-certs-811173 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b21b8648aaebca1ac76f38444ca7f4549f78a2f596ce746730e9322753920600] <==
	I1119 02:43:07.499306       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:43:07.499888       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:43:07.500134       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:43:07.500199       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:43:07.500249       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:43:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:43:07.798988       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:43:07.799063       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:43:07.799080       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:43:07.799229       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:43:08.014247       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:43:08.014285       1 metrics.go:72] Registering metrics
	I1119 02:43:08.014384       1 controller.go:711] "Syncing nftables rules"
	I1119 02:43:17.717515       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:43:17.717589       1 main.go:301] handling current node
	I1119 02:43:27.720647       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:43:27.720679       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a9028199c53c05c75bbe5e83d72cf1bfc4e5993371a526a4eba9cc8e0c058074] <==
	E1119 02:42:59.637562       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1119 02:42:59.643204       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:42:59.645116       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:42:59.645186       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:42:59.648682       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:42:59.649754       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:42:59.840914       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:43:00.445827       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:43:00.449464       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:43:00.449482       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:43:00.899097       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:43:00.932457       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:43:01.053244       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:43:01.059166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 02:43:01.060172       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:43:01.063941       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:43:01.071150       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:43:01.808751       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:43:01.819468       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:43:01.827450       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:43:06.727020       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:43:06.731818       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:43:06.872741       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:43:07.175639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1119 02:43:28.827990       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:37624: use of closed network connection
	
	
	==> kube-controller-manager [f1c6e3421bd2e4b953bb7a2935e1e623f8e77239b371f8f4c6e2769bc668ba70] <==
	I1119 02:43:06.072051       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:43:06.072087       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:43:06.072122       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:43:06.072188       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-811173"
	I1119 02:43:06.072241       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:43:06.072489       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:43:06.072664       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:43:06.072740       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:43:06.073859       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:43:06.075065       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 02:43:06.076537       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:43:06.076601       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:43:06.076624       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:43:06.076630       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:43:06.076634       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:43:06.078787       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:43:06.081928       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:43:06.082042       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:43:06.082894       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-811173" podCIDRs=["10.244.0.0/24"]
	I1119 02:43:06.092965       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:43:06.109530       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:43:06.114709       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:43:06.114728       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:43:06.114755       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:43:21.073308       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9e454c351647e441023ee0ae76398162382b29bf29b8a95077f5c4c365147696] <==
	I1119 02:43:07.329139       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:43:07.411342       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:43:07.512553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:43:07.512607       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 02:43:07.512711       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:43:07.552750       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:43:07.553488       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:43:07.566879       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:43:07.567879       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:43:07.567964       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:07.570341       1 config.go:200] "Starting service config controller"
	I1119 02:43:07.570488       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:43:07.570523       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:43:07.570530       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:43:07.570637       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:43:07.570687       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:43:07.572394       1 config.go:309] "Starting node config controller"
	I1119 02:43:07.572463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:43:07.572473       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:43:07.670955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:43:07.670981       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:43:07.670959       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1b7b2b74c2e7280592a11ca5ab71f4f4aa566349b0854d59a5c1b52419a7fc4e] <==
	E1119 02:42:59.506716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:42:59.507031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:42:59.507721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:42:59.507789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:42:59.507850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:42:59.507935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:42:59.508003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:42:59.508484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:42:59.508519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:42:59.508574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:42:59.508640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:42:59.508644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:42:59.509332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:42:59.509959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:42:59.510043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:43:00.397390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:43:00.437864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:43:00.442002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:43:00.475126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:43:00.528016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:43:00.597186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:43:00.600328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:43:00.619529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:43:00.641608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1119 02:43:03.803393       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:43:02 embed-certs-811173 kubelet[1313]: E1119 02:43:02.721769    1313 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-811173\" already exists" pod="kube-system/kube-apiserver-embed-certs-811173"
	Nov 19 02:43:02 embed-certs-811173 kubelet[1313]: I1119 02:43:02.742056    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-811173" podStartSLOduration=1.7420346439999999 podStartE2EDuration="1.742034644s" podCreationTimestamp="2025-11-19 02:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:02.734064157 +0000 UTC m=+1.135872040" watchObservedRunningTime="2025-11-19 02:43:02.742034644 +0000 UTC m=+1.143842519"
	Nov 19 02:43:02 embed-certs-811173 kubelet[1313]: I1119 02:43:02.753029    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-811173" podStartSLOduration=1.753012018 podStartE2EDuration="1.753012018s" podCreationTimestamp="2025-11-19 02:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:02.742186421 +0000 UTC m=+1.143994305" watchObservedRunningTime="2025-11-19 02:43:02.753012018 +0000 UTC m=+1.154819901"
	Nov 19 02:43:02 embed-certs-811173 kubelet[1313]: I1119 02:43:02.753155    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-811173" podStartSLOduration=1.753146103 podStartE2EDuration="1.753146103s" podCreationTimestamp="2025-11-19 02:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:02.752936593 +0000 UTC m=+1.154744477" watchObservedRunningTime="2025-11-19 02:43:02.753146103 +0000 UTC m=+1.154953986"
	Nov 19 02:43:02 embed-certs-811173 kubelet[1313]: I1119 02:43:02.774810    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-811173" podStartSLOduration=1.774781323 podStartE2EDuration="1.774781323s" podCreationTimestamp="2025-11-19 02:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:02.763897223 +0000 UTC m=+1.165705106" watchObservedRunningTime="2025-11-19 02:43:02.774781323 +0000 UTC m=+1.176589189"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.123274    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.124055    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.914885    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0c0429a0-c37c-4eae-befb-d496610e882c-cni-cfg\") pod \"kindnet-b2w9g\" (UID: \"0c0429a0-c37c-4eae-befb-d496610e882c\") " pod="kube-system/kindnet-b2w9g"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.914929    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p876s\" (UniqueName: \"kubernetes.io/projected/0c0429a0-c37c-4eae-befb-d496610e882c-kube-api-access-p876s\") pod \"kindnet-b2w9g\" (UID: \"0c0429a0-c37c-4eae-befb-d496610e882c\") " pod="kube-system/kindnet-b2w9g"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.914956    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cebbac1b-ff7a-4bdf-b337-ec0b3b320728-kube-proxy\") pod \"kube-proxy-s5bzz\" (UID: \"cebbac1b-ff7a-4bdf-b337-ec0b3b320728\") " pod="kube-system/kube-proxy-s5bzz"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.914994    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cebbac1b-ff7a-4bdf-b337-ec0b3b320728-lib-modules\") pod \"kube-proxy-s5bzz\" (UID: \"cebbac1b-ff7a-4bdf-b337-ec0b3b320728\") " pod="kube-system/kube-proxy-s5bzz"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.915037    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c0429a0-c37c-4eae-befb-d496610e882c-lib-modules\") pod \"kindnet-b2w9g\" (UID: \"0c0429a0-c37c-4eae-befb-d496610e882c\") " pod="kube-system/kindnet-b2w9g"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.915107    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cebbac1b-ff7a-4bdf-b337-ec0b3b320728-xtables-lock\") pod \"kube-proxy-s5bzz\" (UID: \"cebbac1b-ff7a-4bdf-b337-ec0b3b320728\") " pod="kube-system/kube-proxy-s5bzz"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.915134    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqr7r\" (UniqueName: \"kubernetes.io/projected/cebbac1b-ff7a-4bdf-b337-ec0b3b320728-kube-api-access-gqr7r\") pod \"kube-proxy-s5bzz\" (UID: \"cebbac1b-ff7a-4bdf-b337-ec0b3b320728\") " pod="kube-system/kube-proxy-s5bzz"
	Nov 19 02:43:06 embed-certs-811173 kubelet[1313]: I1119 02:43:06.915161    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c0429a0-c37c-4eae-befb-d496610e882c-xtables-lock\") pod \"kindnet-b2w9g\" (UID: \"0c0429a0-c37c-4eae-befb-d496610e882c\") " pod="kube-system/kindnet-b2w9g"
	Nov 19 02:43:07 embed-certs-811173 kubelet[1313]: I1119 02:43:07.743244    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-b2w9g" podStartSLOduration=1.7432172430000001 podStartE2EDuration="1.743217243s" podCreationTimestamp="2025-11-19 02:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:07.74036729 +0000 UTC m=+6.142175173" watchObservedRunningTime="2025-11-19 02:43:07.743217243 +0000 UTC m=+6.145025125"
	Nov 19 02:43:09 embed-certs-811173 kubelet[1313]: I1119 02:43:09.938459    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s5bzz" podStartSLOduration=3.9384204670000003 podStartE2EDuration="3.938420467s" podCreationTimestamp="2025-11-19 02:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:07.754125084 +0000 UTC m=+6.155932967" watchObservedRunningTime="2025-11-19 02:43:09.938420467 +0000 UTC m=+8.340228349"
	Nov 19 02:43:18 embed-certs-811173 kubelet[1313]: I1119 02:43:18.167786    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:43:18 embed-certs-811173 kubelet[1313]: I1119 02:43:18.296220    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srnsw\" (UniqueName: \"kubernetes.io/projected/4b41d056-28d4-4b4a-b546-2fb8c76fe688-kube-api-access-srnsw\") pod \"storage-provisioner\" (UID: \"4b41d056-28d4-4b4a-b546-2fb8c76fe688\") " pod="kube-system/storage-provisioner"
	Nov 19 02:43:18 embed-certs-811173 kubelet[1313]: I1119 02:43:18.296279    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45763e00-8d07-4cd1-bc77-8131988ad187-config-volume\") pod \"coredns-66bc5c9577-6zqr2\" (UID: \"45763e00-8d07-4cd1-bc77-8131988ad187\") " pod="kube-system/coredns-66bc5c9577-6zqr2"
	Nov 19 02:43:18 embed-certs-811173 kubelet[1313]: I1119 02:43:18.296309    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4b41d056-28d4-4b4a-b546-2fb8c76fe688-tmp\") pod \"storage-provisioner\" (UID: \"4b41d056-28d4-4b4a-b546-2fb8c76fe688\") " pod="kube-system/storage-provisioner"
	Nov 19 02:43:18 embed-certs-811173 kubelet[1313]: I1119 02:43:18.296330    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtrd\" (UniqueName: \"kubernetes.io/projected/45763e00-8d07-4cd1-bc77-8131988ad187-kube-api-access-tmtrd\") pod \"coredns-66bc5c9577-6zqr2\" (UID: \"45763e00-8d07-4cd1-bc77-8131988ad187\") " pod="kube-system/coredns-66bc5c9577-6zqr2"
	Nov 19 02:43:18 embed-certs-811173 kubelet[1313]: I1119 02:43:18.765058    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6zqr2" podStartSLOduration=11.765036102 podStartE2EDuration="11.765036102s" podCreationTimestamp="2025-11-19 02:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:18.764821313 +0000 UTC m=+17.166629243" watchObservedRunningTime="2025-11-19 02:43:18.765036102 +0000 UTC m=+17.166843992"
	Nov 19 02:43:18 embed-certs-811173 kubelet[1313]: I1119 02:43:18.788752    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.788730052 podStartE2EDuration="11.788730052s" podCreationTimestamp="2025-11-19 02:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:18.788467502 +0000 UTC m=+17.190275386" watchObservedRunningTime="2025-11-19 02:43:18.788730052 +0000 UTC m=+17.190537936"
	Nov 19 02:43:20 embed-certs-811173 kubelet[1313]: I1119 02:43:20.809943    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5fq7\" (UniqueName: \"kubernetes.io/projected/e73ec6be-f0d4-46e6-8113-18b6d64163b1-kube-api-access-m5fq7\") pod \"busybox\" (UID: \"e73ec6be-f0d4-46e6-8113-18b6d64163b1\") " pod="default/busybox"
	
	
	==> storage-provisioner [6259046ab3176550eec0c6f214ab8a49ed8a32e8d969ccef08b067ee5fbe3455] <==
	I1119 02:43:18.548206       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:43:18.557550       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:43:18.557650       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:43:18.559863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:18.564982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:43:18.565105       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:43:18.565248       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-811173_61926f4b-ebd5-4f7a-bd1f-e2c98783a5fd!
	I1119 02:43:18.565500       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6ae127c-c859-4ddd-8bc9-6532cea887ea", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-811173_61926f4b-ebd5-4f7a-bd1f-e2c98783a5fd became leader
	W1119 02:43:18.567489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:18.571286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:43:18.666237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-811173_61926f4b-ebd5-4f7a-bd1f-e2c98783a5fd!
	W1119 02:43:20.575701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:20.582397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:22.585247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:22.589987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:24.592559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:24.597647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:26.601005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:26.604539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:28.607696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:28.611731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
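A note on the dump above: the kube-scheduler "Failed to watch ... forbidden" entries are startup noise from before RBAC bootstrapping finished; the final "Caches are synced" line shows the watches recovered. The scheduler's permissions can be spot-checked after startup with standard `kubectl auth can-i` queries (context name taken from this run):

	kubectl --context embed-certs-811173 auth can-i list pods --as=system:kube-scheduler
	kubectl --context embed-certs-811173 auth can-i watch nodes --as=system:kube-scheduler

Both should print "yes" once the bootstrap RBAC policy is in place.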
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-811173 -n embed-certs-811173
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-811173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.11s)
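To iterate on this failure outside CI, the subtest can be re-run in isolation; a sketch assuming the usual minikube integration-test layout and that out/minikube-linux-amd64 has already been built:

	go test ./test/integration -run 'TestStartStop/group/embed-certs/serial/EnableAddonWhileActive' -timeout 60m -v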

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (229.785674ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:43:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-167150 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-167150 describe deploy/metrics-server -n kube-system: exit status 1 (54.575669ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-167150 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
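The underlying failure is the paused-state probe quoted in the stderr block above: before enabling the addon, minikube runs `sudo runc list -f json` on the node, and runc exits with status 1 because its default state directory /run/runc is missing. The probe can be replayed by hand; a sketch using the profile name from this run:

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-167150 "ls -ld /run/runc"
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-167150 "sudo runc list -f json"

The second command should reproduce the "open /run/runc: no such file or directory" error verbatim.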
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-167150
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-167150:

-- stdout --
	[
	    {
	        "Id": "eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62",
	        "Created": "2025-11-19T02:42:49.168084052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308294,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:42:49.223990929Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/hostname",
	        "HostsPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/hosts",
	        "LogPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62-json.log",
	        "Name": "/default-k8s-diff-port-167150",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-167150:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-167150",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62",
	                "LowerDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-167150",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-167150/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-167150",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-167150",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-167150",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "29e229ea496a1d0a9c212e32279f2be156c3abb29da386a9f05a63cf32d1fb86",
	            "SandboxKey": "/var/run/docker/netns/29e229ea496a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-167150": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "446e8ca4ab47bdd8748a928cc2372566c0406b83b85d73866a63b7236a1153af",
	                    "EndpointID": "7b1927246c5b1d8861e50981abcd1339f51249ae9cadb944dee2e834ba2d1cb2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f6:33:ef:e8:6a:32",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-167150",
	                        "eba2f66817ce"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
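For scripting against this inspect output, for example to reach the API server on its remapped 8444 port (33106 above), the host port can be pulled out with an inspect format string; the template below is standard docker CLI Go templating, with the container name from this run:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-167150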
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-167150 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-001617 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo docker system info                                                                                                                                 │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cri-dockerd --version                                                                                                                              │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo containerd config dump                                                                                                                             │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo crio config                                                                                                                                        │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p bridge-001617                                                                                                                                                         │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p disable-driver-mounts-682232                                                                                                                                          │ disable-driver-mounts-682232 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p old-k8s-version-987573 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                             │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:42:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:42:42.176241  306860 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:42:42.176542  306860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:42:42.176552  306860 out.go:374] Setting ErrFile to fd 2...
	I1119 02:42:42.176557  306860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:42:42.176798  306860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:42:42.177312  306860 out.go:368] Setting JSON to false
	I1119 02:42:42.178694  306860 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5109,"bootTime":1763515053,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:42:42.178817  306860 start.go:143] virtualization: kvm guest
	I1119 02:42:42.181266  306860 out.go:179] * [default-k8s-diff-port-167150] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:42:42.182506  306860 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:42:42.182508  306860 notify.go:221] Checking for updates...
	I1119 02:42:42.184984  306860 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:42:42.186380  306860 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:42:42.187520  306860 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:42:42.188641  306860 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:42:42.189749  306860 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:42:42.191476  306860 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:42.191626  306860 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:42.191747  306860 config.go:182] Loaded profile config "old-k8s-version-987573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:42:42.191879  306860 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:42:42.219938  306860 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:42:42.220096  306860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:42:42.291707  306860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-19 02:42:42.280719148 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:42:42.291851  306860 docker.go:319] overlay module found
	I1119 02:42:42.294039  306860 out.go:179] * Using the docker driver based on user configuration
	I1119 02:42:42.295025  306860 start.go:309] selected driver: docker
	I1119 02:42:42.295045  306860 start.go:930] validating driver "docker" against <nil>
	I1119 02:42:42.295071  306860 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:42:42.295643  306860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:42:42.358641  306860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-19 02:42:42.347786548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:42:42.358876  306860 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:42:42.359101  306860 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:42:42.361283  306860 out.go:179] * Using Docker driver with root privileges
	I1119 02:42:42.362628  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:42:42.362714  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:42.362728  306860 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:42:42.362817  306860 start.go:353] cluster config:
	{Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:42.364219  306860 out.go:179] * Starting "default-k8s-diff-port-167150" primary control-plane node in "default-k8s-diff-port-167150" cluster
	I1119 02:42:42.367198  306860 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:42:42.368425  306860 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:42:42.369910  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:42.369948  306860 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:42:42.369957  306860 cache.go:65] Caching tarball of preloaded images
	I1119 02:42:42.369996  306860 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:42:42.370067  306860 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:42:42.370082  306860 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:42:42.370209  306860 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json ...
	I1119 02:42:42.370241  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json: {Name:mkcddbcc964a690b001741c541d540f001994a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:42.393924  306860 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:42:42.393944  306860 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:42:42.393962  306860 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:42:42.393994  306860 start.go:360] acquireMachinesLock for default-k8s-diff-port-167150: {Name:mk2e469e9e78dab6a8d53f30fec89bc1e449a209 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:42:42.394102  306860 start.go:364] duration metric: took 89.942µs to acquireMachinesLock for "default-k8s-diff-port-167150"
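acquireMachinesLock serializes machine creation across the parallel profiles; the {Delay:500ms Timeout:10m0s} parameters above describe a poll-until-timeout pattern. A hand-rolled sketch of that pattern (the try callback is an assumption, standing in for the file-backed lock attempt):

    package lock

    import (
        "fmt"
        "time"
    )

    // tryAcquireWithin polls try every delay until it succeeds or the timeout
    // elapses, mirroring the Delay/Timeout pair in the log line above.
    func tryAcquireWithin(try func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !try() {
            if time.Now().After(deadline) {
                return fmt.Errorf("lock not acquired within %s", timeout)
            }
            time.Sleep(delay)
        }
        return nil
    }

Here the lock is uncontended, which is why the acquisition took only 89.942µs.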
	I1119 02:42:42.394130  306860 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:42:42.394220  306860 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:42:39.183788  302848 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-811173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.239456846s)
	I1119 02:42:39.183822  302848 kic.go:203] duration metric: took 4.239611554s to extract preloaded images to volume ...
	W1119 02:42:39.183909  302848 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:42:39.183954  302848 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:42:39.184001  302848 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:42:39.255629  302848 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-811173 --name embed-certs-811173 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-811173 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-811173 --network embed-certs-811173 --ip 192.168.85.2 --volume embed-certs-811173:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:42:39.648577  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Running}}
	I1119 02:42:39.668032  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:39.687214  302848 cli_runner.go:164] Run: docker exec embed-certs-811173 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:42:39.745898  302848 oci.go:144] the created container "embed-certs-811173" has a running status.
	I1119 02:42:39.745933  302848 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa...
	I1119 02:42:40.188034  302848 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:42:40.217982  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:40.237916  302848 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:42:40.237940  302848 kic_runner.go:114] Args: [docker exec --privileged embed-certs-811173 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:42:40.289247  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:42:40.309791  302848 machine.go:94] provisionDockerMachine start ...
	I1119 02:42:40.309919  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:40.329857  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:40.330085  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:40.330094  302848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:42:40.330814  302848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40226->127.0.0.1:33098: read: connection reset by peer
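The connection reset at 02:42:40 is expected: sshd inside the freshly started container is not yet accepting connections, so provisioning retries until the hostname command succeeds three seconds later. A minimal retry sketch using golang.org/x/crypto/ssh (the port and key path come from this log; the loop shape and attempt count are assumptions, and error handling is elided for brevity):

    package main

    import (
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd answers or attempts run out.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            c, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return c, nil
            }
            lastErr = err // e.g. "read: connection reset by peer" while sshd starts
            time.Sleep(time.Second)
        }
        return nil, lastErr
    }

    func main() {
        pem, _ := os.ReadFile(".minikube/machines/embed-certs-811173/id_rsa")
        signer, _ := ssh.ParsePrivateKey(pem)
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            Timeout:         5 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:33098", cfg, 30)
        if err != nil {
            panic(err)
        }
        defer client.Close()
    }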
	I1119 02:42:43.466968  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-811173
	
	I1119 02:42:43.466997  302848 ubuntu.go:182] provisioning hostname "embed-certs-811173"
	I1119 02:42:43.467046  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:43.487761  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:43.488030  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:43.488051  302848 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-811173 && echo "embed-certs-811173" | sudo tee /etc/hostname
	I1119 02:42:43.643097  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-811173
	
	I1119 02:42:43.643198  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:43.663378  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:43.663636  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:43.663655  302848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-811173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-811173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-811173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:42:43.798171  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
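The shell fragment above is idempotent: it only touches /etc/hosts when no line already ends in the new hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a duplicate, so repeated provisioning runs do not grow the file.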
	I1119 02:42:43.798205  302848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:42:43.798228  302848 ubuntu.go:190] setting up certificates
	I1119 02:42:43.798241  302848 provision.go:84] configureAuth start
	I1119 02:42:43.798305  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:43.819034  302848 provision.go:143] copyHostCerts
	I1119 02:42:43.819102  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:42:43.819115  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:42:43.819176  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:42:43.819262  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:42:43.819270  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:42:43.819297  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:42:43.819360  302848 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:42:43.819368  302848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:42:43.819392  302848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:42:43.819475  302848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.embed-certs-811173 san=[127.0.0.1 192.168.85.2 embed-certs-811173 localhost minikube]
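The server certificate is generated with exactly the SAN set shown in the san=[...] list above. A sketch of producing an equivalent certificate with crypto/x509 (the CA plumbing is elided; the names and the 26280h expiry mirror values from this log, everything else is illustrative):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate signed by ca/caKey with the
    // SAN set reported in the log line above for embed-certs-811173.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-811173"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"embed-certs-811173", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }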
	I1119 02:42:44.009209  302848 provision.go:177] copyRemoteCerts
	I1119 02:42:44.009280  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:42:44.009327  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.029510  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:40.627209  299668 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.471693628s)
	I1119 02:42:40.627247  299668 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 02:42:40.627277  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1119 02:42:40.627374  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.472009201s)
	I1119 02:42:40.627402  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 02:42:40.627449  299668 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 02:42:40.627495  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1119 02:42:42.166462  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.538920814s)
	I1119 02:42:42.166489  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 02:42:42.166520  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 02:42:42.166567  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 02:42:43.179025  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.012437665s)
	I1119 02:42:43.179053  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 02:42:43.179080  299668 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 02:42:43.179117  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1119 02:42:41.454319  291163 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:42:41.462416  291163 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 02:42:41.462446  291163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:42:41.496324  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:42:42.356676  291163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:42:42.356833  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-987573 minikube.k8s.io/updated_at=2025_11_19T02_42_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=old-k8s-version-987573 minikube.k8s.io/primary=true
	I1119 02:42:42.356833  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:42.367139  291163 ops.go:34] apiserver oom_adj: -16
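The -16 read back from /proc/$(pgrep kube-apiserver)/oom_adj confirms the apiserver is biased away from the kernel OOM killer relative to ordinary processes (which score 0), so under memory pressure workloads are evicted before the control plane.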
	I1119 02:42:42.457034  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:42.957751  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:43.457688  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:43.957153  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:44.457654  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:44.957568  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:45.457760  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:42.395695  306860 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:42:42.395917  306860 start.go:159] libmachine.API.Create for "default-k8s-diff-port-167150" (driver="docker")
	I1119 02:42:42.395950  306860 client.go:173] LocalClient.Create starting
	I1119 02:42:42.396027  306860 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 02:42:42.396063  306860 main.go:143] libmachine: Decoding PEM data...
	I1119 02:42:42.396092  306860 main.go:143] libmachine: Parsing certificate...
	I1119 02:42:42.396166  306860 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 02:42:42.396197  306860 main.go:143] libmachine: Decoding PEM data...
	I1119 02:42:42.396215  306860 main.go:143] libmachine: Parsing certificate...
	I1119 02:42:42.396556  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:42:42.414929  306860 cli_runner.go:211] docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:42:42.415012  306860 network_create.go:284] running [docker network inspect default-k8s-diff-port-167150] to gather additional debugging logs...
	I1119 02:42:42.415033  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150
	W1119 02:42:42.434734  306860 cli_runner.go:211] docker network inspect default-k8s-diff-port-167150 returned with exit code 1
	I1119 02:42:42.434765  306860 network_create.go:287] error running [docker network inspect default-k8s-diff-port-167150]: docker network inspect default-k8s-diff-port-167150: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-167150 not found
	I1119 02:42:42.434797  306860 network_create.go:289] output of [docker network inspect default-k8s-diff-port-167150]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-167150 not found
	
	** /stderr **
	I1119 02:42:42.434886  306860 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:42.454554  306860 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
	I1119 02:42:42.455185  306860 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-70e7d73f86d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:64:3f:46:8e:7a} reservation:<nil>}
	I1119 02:42:42.455956  306860 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ef477b5a23 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:eb:22:b3:62:92} reservation:<nil>}
	I1119 02:42:42.456451  306860 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4d7fb52c0aef IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:ad:9c:9a:f3:90} reservation:<nil>}
	I1119 02:42:42.457310  306860 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3129c4b60559 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:04:d6:88:46:9c} reservation:<nil>}
	I1119 02:42:42.458231  306860 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f14070}
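The subnet probe walks candidate /24s and skips any already claimed by an existing bridge; in this log the candidates advance by 9 in the third octet (49, 58, 67, 76, 85) until 192.168.94.0/24 comes up free. A sketch of that walk, where isTaken is an assumed stand-in for the docker network check:

    package netutil

    import "fmt"

    // freeSubnet mirrors the walk in the log: start at 192.168.49.0/24 and
    // step the third octet by 9 until a candidate is not already taken.
    func freeSubnet(isTaken func(cidr string) bool) (string, bool) {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !isTaken(cidr) {
                return cidr, true
            }
        }
        return "", false
    }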
	I1119 02:42:42.458263  306860 network_create.go:124] attempt to create docker network default-k8s-diff-port-167150 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1119 02:42:42.458321  306860 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 default-k8s-diff-port-167150
	I1119 02:42:42.508901  306860 network_create.go:108] docker network default-k8s-diff-port-167150 192.168.94.0/24 created
	I1119 02:42:42.508935  306860 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-167150" container
	I1119 02:42:42.509018  306860 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:42:42.530727  306860 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-167150 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:42:42.549909  306860 oci.go:103] Successfully created a docker volume default-k8s-diff-port-167150
	I1119 02:42:42.549999  306860 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-167150-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --entrypoint /usr/bin/test -v default-k8s-diff-port-167150:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:42:43.411678  306860 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-167150
	I1119 02:42:43.411748  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:43.411762  306860 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:42:43.411813  306860 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-167150:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
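Both profiles seed the node's /var the same way: a throwaway run of the kicbase image mounts the preload tarball read-only alongside the named volume and untars it with -I lz4, so the node container later starts with all Kubernetes images already in place (the embed-certs-811173 run of this same command completed above in 4.24s).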
	I1119 02:42:44.129173  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:42:44.149365  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:42:44.166610  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:42:44.183427  302848 provision.go:87] duration metric: took 385.168944ms to configureAuth
	I1119 02:42:44.183464  302848 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:42:44.183643  302848 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:44.183766  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.202233  302848 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:44.202417  302848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1119 02:42:44.202444  302848 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:42:44.503275  302848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:42:44.503306  302848 machine.go:97] duration metric: took 4.193483812s to provisionDockerMachine
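The /etc/sysconfig/crio.minikube write a few lines up is how the service CIDR (10.96.0.0/12) is registered as an insecure-registry range for CRI-O, letting in-cluster registries on ClusterIPs be pulled from without TLS; the crio restart folded into that same SSH command accounts for much of the 4.19s provisioning total.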
	I1119 02:42:44.503317  302848 client.go:176] duration metric: took 10.179262279s to LocalClient.Create
	I1119 02:42:44.503337  302848 start.go:167] duration metric: took 10.179334886s to libmachine.API.Create "embed-certs-811173"
	I1119 02:42:44.503346  302848 start.go:293] postStartSetup for "embed-certs-811173" (driver="docker")
	I1119 02:42:44.503358  302848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:42:44.503415  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:42:44.503480  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.526986  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.639041  302848 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:42:44.644425  302848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:42:44.644489  302848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:42:44.644502  302848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:42:44.644562  302848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:42:44.644662  302848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:42:44.644802  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:42:44.657698  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:44.684627  302848 start.go:296] duration metric: took 181.267139ms for postStartSetup
	I1119 02:42:44.685672  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:44.709637  302848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/config.json ...
	I1119 02:42:44.709970  302848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:42:44.710086  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.735883  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.842589  302848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:42:44.848370  302848 start.go:128] duration metric: took 10.52622031s to createHost
	I1119 02:42:44.848397  302848 start.go:83] releasing machines lock for "embed-certs-811173", held for 10.526348738s
	I1119 02:42:44.848480  302848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-811173
	I1119 02:42:44.873209  302848 ssh_runner.go:195] Run: cat /version.json
	I1119 02:42:44.873265  302848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:42:44.873267  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.873325  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:42:44.895290  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:44.896255  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:42:45.089046  302848 ssh_runner.go:195] Run: systemctl --version
	I1119 02:42:45.096166  302848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:42:45.135030  302848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:42:45.140127  302848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:42:45.140199  302848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:42:45.170487  302848 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:42:45.170513  302848 start.go:496] detecting cgroup driver to use...
	I1119 02:42:45.170545  302848 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:42:45.170595  302848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:42:45.188031  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:42:45.201633  302848 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:42:45.201682  302848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:42:45.219175  302848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:42:45.238631  302848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:42:45.357829  302848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:42:45.467480  302848 docker.go:234] disabling docker service ...
	I1119 02:42:45.467546  302848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:42:45.493546  302848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:42:45.508908  302848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:42:45.630796  302848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:42:45.744606  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:42:45.758583  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:42:45.802834  302848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:42:45.802888  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.815732  302848 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:42:45.815833  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.825707  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.847178  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.877522  302848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:42:45.886218  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:45.939829  302848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:46.000872  302848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:46.058642  302848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:42:46.066800  302848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:42:46.074598  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:46.154622  302848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:42:49.212232  302848 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.057563682s)
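Taken together, the sed pipeline above (02:42:45.80 through 02:42:46.00) should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following; this fragment is reconstructed from those commands, not captured in the log:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The roughly 3s crio restart that follows is the cost of picking the new config up before kubeadm runs.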
	I1119 02:42:49.212266  302848 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:42:49.212309  302848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:42:49.217067  302848 start.go:564] Will wait 60s for crictl version
	I1119 02:42:49.217124  302848 ssh_runner.go:195] Run: which crictl
	I1119 02:42:49.221132  302848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:42:49.251469  302848 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:42:49.251561  302848 ssh_runner.go:195] Run: crio --version
	I1119 02:42:49.280463  302848 ssh_runner.go:195] Run: crio --version
	I1119 02:42:49.310498  302848 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:42:48.297963  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.118818905s)
	I1119 02:42:48.297993  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 02:42:48.298019  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 02:42:48.298066  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 02:42:49.881405  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.583300882s)
	I1119 02:42:49.881450  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 02:42:49.881479  299668 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 02:42:49.881558  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 02:42:45.957339  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:46.457346  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:46.957840  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:47.457460  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:47.957489  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:48.457490  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:48.957548  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.457120  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.957332  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:50.457258  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:49.311873  302848 cli_runner.go:164] Run: docker network inspect embed-certs-811173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:49.337627  302848 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:42:49.343117  302848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:49.363673  302848 kubeadm.go:884] updating cluster {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:42:49.363803  302848 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:49.363881  302848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:49.402301  302848 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:49.402327  302848 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:42:49.402381  302848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:49.432172  302848 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:49.432198  302848 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:42:49.432208  302848 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 02:42:49.432312  302848 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-811173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
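In the kubelet unit above, the empty ExecStart= line is deliberate: systemd rejects a second ExecStart for a non-oneshot service unless the list is first cleared, so the blank assignment resets the packaged command before the minikube-specific one is set.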
	I1119 02:42:49.432394  302848 ssh_runner.go:195] Run: crio config
	I1119 02:42:49.490697  302848 cni.go:84] Creating CNI manager for ""
	I1119 02:42:49.490766  302848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:49.490806  302848 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:49.490847  302848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-811173 NodeName:embed-certs-811173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:49.491024  302848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-811173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
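The 2214-byte kubeadm.yaml.new scp'd below is this config rendered from the kubeadm options struct above. A reduced sketch of that render step using text/template (the trimmed template and field names are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmTmpl renders a stripped-down version of the config printed above.
    var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(
        "apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n" +
            "kubernetesVersion: {{.KubernetesVersion}}\n" +
            "controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}\n" +
            "networking:\n  podSubnet: \"{{.PodSubnet}}\"\n  serviceSubnet: {{.ServiceCIDR}}\n"))

    func main() {
        // Values taken from the kubeadm options log line above.
        _ = kubeadmTmpl.Execute(os.Stdout, map[string]any{
            "KubernetesVersion":   "v1.34.1",
            "ControlPlaneAddress": "control-plane.minikube.internal",
            "APIServerPort":       8443,
            "PodSubnet":           "10.244.0.0/16",
            "ServiceCIDR":         "10.96.0.0/12",
        })
    }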
	I1119 02:42:49.491099  302848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:49.501687  302848 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:42:49.501746  302848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:49.512773  302848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:42:49.533263  302848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:49.552949  302848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 02:42:49.567525  302848 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:49.572161  302848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:49.583669  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:49.696403  302848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:49.727028  302848 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173 for IP: 192.168.85.2
	I1119 02:42:49.727140  302848 certs.go:195] generating shared ca certs ...
	I1119 02:42:49.727168  302848 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:49.727476  302848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:49.727544  302848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:49.727557  302848 certs.go:257] generating profile certs ...
	I1119 02:42:49.727625  302848 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key
	I1119 02:42:49.727650  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt with IP's: []
	I1119 02:42:50.145686  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt ...
	I1119 02:42:50.145726  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.crt: {Name:mke65652a37d1645724814d58214d8122c0736b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.145910  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key ...
	I1119 02:42:50.145933  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key: {Name:mk4ef5d0666a41b73aa30b3e0755e11f9f8fb3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.146056  302848 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4
	I1119 02:42:50.146079  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 02:42:50.407271  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 ...
	I1119 02:42:50.407295  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4: {Name:mk5f035a33d372bd059255b16679fd50e2c33fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.407442  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4 ...
	I1119 02:42:50.407456  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4: {Name:mka92b1af7e6c09f8bfc52286518647800bcb5a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:50.407529  302848 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt.a0a915e4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt
	I1119 02:42:50.407602  302848 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key
	I1119 02:42:50.407658  302848 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key
	I1119 02:42:50.407673  302848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt with IP's: []
	I1119 02:42:51.018427  302848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt ...
	I1119 02:42:51.018475  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt: {Name:mkaf83dc022cbae8f555c0ae724724cf38e2e4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:51.018641  302848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key ...
	I1119 02:42:51.018703  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key: {Name:mk810704305f00f9b6af79898dc7dd3a9f2fe056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
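	The crypto.go/lock.go pairs above generate profile certificates signed by the shared minikube CA. A rough openssl equivalent of one client-cert generation, assuming the CA material is available as ca.crt/ca.key (file names are placeholders; the CN/O values mirror a typical Kubernetes admin client and are illustrative, not taken from this log):
	
		# Create a key and CSR for the client identity, then sign it with the CA.
		openssl genrsa -out client.key 2048
		openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
		openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt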
	I1119 02:42:51.018949  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:51.019001  302848 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:51.019016  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:51.019050  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:51.019085  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:51.019116  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:51.019168  302848 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:51.019875  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:51.045884  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:51.068119  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:51.085405  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:51.102412  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:42:51.119942  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:42:51.141845  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:51.163668  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:42:51.185276  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:51.206376  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:51.223822  302848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:51.240933  302848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:51.254070  302848 ssh_runner.go:195] Run: openssl version
	I1119 02:42:51.260133  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:51.268759  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.272373  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.272418  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:51.314661  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:42:51.325625  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:51.335401  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.339792  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.339844  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:51.374219  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:51.382719  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:51.391325  302848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.395186  302848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.395235  302848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:51.433387  302848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
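	The `openssl x509 -hash` / `ln -fs` pairs above register each CA with the system trust store: OpenSSL resolves CAs by subject-hash filenames (`<hash>.0`) in /etc/ssl/certs. A minimal sketch of the same registration for the minikubeCA cert (paths as in the log):
	
		# Link the cert into /etc/ssl/certs, then add the subject-hash symlink
		# that OpenSSL uses for lookup (b5213941.0 in this run).
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"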
	I1119 02:42:51.441878  302848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:51.446149  302848 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:42:51.446206  302848 kubeadm.go:401] StartCluster: {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:51.446288  302848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:51.446341  302848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:51.474545  302848 cri.go:89] found id: ""
	I1119 02:42:51.474598  302848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:51.483078  302848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:51.491910  302848 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:51.491960  302848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:51.500593  302848 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:51.500610  302848 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:51.500655  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:42:51.508497  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:51.508546  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:51.516422  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:42:51.525757  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:51.525807  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:51.536275  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:42:51.545935  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:51.545987  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:51.554976  302848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:42:51.563559  302848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:51.563604  302848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:42:51.570652  302848 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:51.615030  302848 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:51.615151  302848 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:51.639511  302848 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:51.639676  302848 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:51.639872  302848 kubeadm.go:319] OS: Linux
	I1119 02:42:51.639979  302848 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:51.640073  302848 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:51.640147  302848 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:51.640208  302848 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:51.640267  302848 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:51.640326  302848 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:51.640387  302848 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:51.640451  302848 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:51.708966  302848 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:51.709135  302848 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:51.709283  302848 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:51.716801  302848 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:42:49.083522  306860 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-167150:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.671639706s)
	I1119 02:42:49.083553  306860 kic.go:203] duration metric: took 5.671789118s to extract preloaded images to volume ...
	W1119 02:42:49.083624  306860 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:42:49.083651  306860 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:42:49.083684  306860 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:42:49.149882  306860 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-167150 --name default-k8s-diff-port-167150 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-167150 --network default-k8s-diff-port-167150 --ip 192.168.94.2 --volume default-k8s-diff-port-167150:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:42:49.500594  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Running}}
	I1119 02:42:49.523895  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:49.547442  306860 cli_runner.go:164] Run: docker exec default-k8s-diff-port-167150 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:42:49.600101  306860 oci.go:144] the created container "default-k8s-diff-port-167150" has a running status.
	I1119 02:42:49.600142  306860 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa...
	I1119 02:42:50.269489  306860 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:42:50.295459  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:50.315528  306860 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:42:50.315562  306860 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-167150 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:42:50.356860  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:42:50.374600  306860 machine.go:94] provisionDockerMachine start ...
	I1119 02:42:50.374689  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.391114  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.391363  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.391382  306860 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:42:50.523354  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:42:50.523388  306860 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-167150"
	I1119 02:42:50.523491  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.548578  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.549009  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.549031  306860 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-167150 && echo "default-k8s-diff-port-167150" | sudo tee /etc/hostname
	I1119 02:42:50.708967  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:42:50.709056  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:50.729860  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:50.730154  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:50.730186  306860 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-167150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-167150/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-167150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:42:50.877302  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:42:50.877332  306860 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:42:50.877354  306860 ubuntu.go:190] setting up certificates
	I1119 02:42:50.877366  306860 provision.go:84] configureAuth start
	I1119 02:42:50.877421  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:50.899681  306860 provision.go:143] copyHostCerts
	I1119 02:42:50.899742  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:42:50.899755  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:42:50.899823  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:42:50.899935  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:42:50.899952  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:42:50.899994  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:42:50.900091  306860 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:42:50.900100  306860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:42:50.900133  306860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:42:50.900206  306860 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-167150 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-167150 localhost minikube]
	I1119 02:42:51.790042  306860 provision.go:177] copyRemoteCerts
	I1119 02:42:51.790120  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:42:51.790163  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:51.812679  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:51.914566  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:42:51.933520  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 02:42:51.951210  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:42:51.972791  306860 provision.go:87] duration metric: took 1.095412973s to configureAuth
	I1119 02:42:51.972820  306860 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:42:51.973010  306860 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:51.973126  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:51.993887  306860 main.go:143] libmachine: Using SSH client type: native
	I1119 02:42:51.994333  306860 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 02:42:51.994382  306860 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:42:51.720233  302848 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:51.720329  302848 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:51.720424  302848 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:52.110567  302848 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:52.469402  302848 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:52.783731  302848 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:53.170607  302848 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:53.607637  302848 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:53.607789  302848 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-811173 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:42:52.305265  306860 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:42:52.305290  306860 machine.go:97] duration metric: took 1.930670923s to provisionDockerMachine
	I1119 02:42:52.305303  306860 client.go:176] duration metric: took 9.909346044s to LocalClient.Create
	I1119 02:42:52.305321  306860 start.go:167] duration metric: took 9.909403032s to libmachine.API.Create "default-k8s-diff-port-167150"
	I1119 02:42:52.305331  306860 start.go:293] postStartSetup for "default-k8s-diff-port-167150" (driver="docker")
	I1119 02:42:52.305347  306860 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:42:52.305414  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:42:52.305477  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.326893  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.427784  306860 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:42:52.432280  306860 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:42:52.432314  306860 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:42:52.432326  306860 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:42:52.432378  306860 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:42:52.432493  306860 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:42:52.432606  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:42:52.440486  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:52.461537  306860 start.go:296] duration metric: took 156.190397ms for postStartSetup
	I1119 02:42:52.461851  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:52.483860  306860 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json ...
	I1119 02:42:52.484137  306860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:42:52.484184  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.504090  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.602388  306860 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:42:52.607059  306860 start.go:128] duration metric: took 10.212819294s to createHost
	I1119 02:42:52.607086  306860 start.go:83] releasing machines lock for "default-k8s-diff-port-167150", held for 10.212970587s
	I1119 02:42:52.607148  306860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:42:52.626059  306860 ssh_runner.go:195] Run: cat /version.json
	I1119 02:42:52.626109  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.626132  306860 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:42:52.626195  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:42:52.646677  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.647867  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:42:52.822035  306860 ssh_runner.go:195] Run: systemctl --version
	I1119 02:42:52.831419  306860 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:42:52.869148  306860 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:42:52.873990  306860 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:42:52.874068  306860 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:42:52.901044  306860 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:42:52.901066  306860 start.go:496] detecting cgroup driver to use...
	I1119 02:42:52.901097  306860 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:42:52.901141  306860 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:42:52.917792  306860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:42:52.932809  306860 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:42:52.932864  306860 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:42:52.953113  306860 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:42:52.974059  306860 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:42:53.085982  306860 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:42:53.191486  306860 docker.go:234] disabling docker service ...
	I1119 02:42:53.191545  306860 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:42:53.209965  306860 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:42:53.222536  306860 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:42:53.334426  306860 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:42:53.452134  306860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:42:53.470021  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:42:53.491692  306860 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:42:53.491759  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.507808  306860 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:42:53.507878  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.521160  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.533686  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.545419  306860 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:42:53.559221  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.572537  306860 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.591930  306860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:42:53.604233  306860 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:42:53.612761  306860 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:42:53.620567  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:53.702418  306860 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:42:54.895903  306860 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.19344223s)
	I1119 02:42:54.895934  306860 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:42:54.895987  306860 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:42:54.899921  306860 start.go:564] Will wait 60s for crictl version
	I1119 02:42:54.899979  306860 ssh_runner.go:195] Run: which crictl
	I1119 02:42:54.903499  306860 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:42:54.927965  306860 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:42:54.928037  306860 ssh_runner.go:195] Run: crio --version
	I1119 02:42:54.960299  306860 ssh_runner.go:195] Run: crio --version
	I1119 02:42:55.000689  306860 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
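	Before this "Preparing Kubernetes" line, the harness restarted CRI-O with the new pause image and cgroup manager and then probed the runtime over its CRI socket. To check the same thing by hand (socket path as in the log):
	
		# Confirm CRI-O answers on its socket and report runtime/API versions.
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info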
	I1119 02:42:51.242518  299668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.36093376s)
	I1119 02:42:51.242553  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 02:42:51.242587  299668 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:42:51.242638  299668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:42:51.884817  299668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 02:42:51.884865  299668 cache_images.go:125] Successfully loaded all cached images
	I1119 02:42:51.884872  299668 cache_images.go:94] duration metric: took 16.678403063s to LoadCachedImages
	I1119 02:42:51.884886  299668 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 02:42:51.884977  299668 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-837474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
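	The kubelet unit shown above is installed as a systemd drop-in (see the 10-kubeadm.conf scp later in this log). To confirm what the kubelet will actually run with after such an edit, one can reload and print the merged unit:
	
		# Pick up the new drop-in, then show the effective (merged) unit file.
		sudo systemctl daemon-reload
		systemctl cat kubelet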
	I1119 02:42:51.885077  299668 ssh_runner.go:195] Run: crio config
	I1119 02:42:51.934055  299668 cni.go:84] Creating CNI manager for ""
	I1119 02:42:51.934075  299668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:51.934089  299668 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:51.934107  299668 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-837474 NodeName:no-preload-837474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:51.934256  299668 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-837474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
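	The rendered kubeadm config above is what later gets copied to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init. Newer kubeadm releases include a static validator, so a config like this can be sanity-checked before init (a sketch, using the path from this log):
	
		# Validate the generated config against the kubeadm v1beta4 API.
		kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml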
	I1119 02:42:51.934344  299668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:51.942351  299668 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 02:42:51.942409  299668 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:51.950268  299668 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1119 02:42:51.950341  299668 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1119 02:42:51.950376  299668 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1119 02:42:51.950348  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 02:42:51.954459  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 02:42:51.954493  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1119 02:42:53.238137  299668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:42:53.257679  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 02:42:53.263721  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 02:42:53.263752  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1119 02:42:53.344069  299668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 02:42:53.351667  299668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 02:42:53.351703  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
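	The download.go lines above fetch each binary alongside its published .sha256 and verify before transfer. The equivalent manual fetch-and-verify, with the URL copied from the log:
	
		# Download kubeadm and check it against the upstream checksum file.
		curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm"
		echo "$(curl -sL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256)  kubeadm" | sha256sum --check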
	I1119 02:42:53.612715  299668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:53.620479  299668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:42:53.633087  299668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:53.657867  299668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1119 02:42:53.670102  299668 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:53.673427  299668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:53.683353  299668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:53.768236  299668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:53.789788  299668 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474 for IP: 192.168.103.2
	I1119 02:42:53.789809  299668 certs.go:195] generating shared ca certs ...
	I1119 02:42:53.789829  299668 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:53.789987  299668 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:53.790033  299668 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:53.790044  299668 certs.go:257] generating profile certs ...
	I1119 02:42:53.790109  299668 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key
	I1119 02:42:53.790124  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt with IP's: []
	I1119 02:42:54.153349  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt ...
	I1119 02:42:54.153376  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.crt: {Name:mk582fda973473014e16fbac704f7616a0f6aa62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:54.162415  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key ...
	I1119 02:42:54.162455  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key: {Name:mkf82ec201b7ec108f85e3c1cb709e2e0c644536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:54.162615  299668 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449
	I1119 02:42:54.162634  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1119 02:42:50.957718  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:51.457622  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:51.958197  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:52.457608  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:52.957737  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:53.457646  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:53.957900  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:54.457538  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:54.957631  291163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:42:55.052333  291163 kubeadm.go:1114] duration metric: took 12.695568902s to wait for elevateKubeSystemPrivileges
	I1119 02:42:55.052368  291163 kubeadm.go:403] duration metric: took 26.311686714s to StartCluster
	I1119 02:42:55.052395  291163 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.052484  291163 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:42:55.053537  291163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.053789  291163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:42:55.053803  291163 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:42:55.053872  291163 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:42:55.053963  291163 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-987573"
	I1119 02:42:55.053987  291163 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-987573"
	I1119 02:42:55.054018  291163 host.go:66] Checking if "old-k8s-version-987573" exists ...
	I1119 02:42:55.054054  291163 config.go:182] Loaded profile config "old-k8s-version-987573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:42:55.054262  291163 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-987573"
	I1119 02:42:55.054313  291163 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-987573"
	I1119 02:42:55.054691  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.054736  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.058586  291163 out.go:179] * Verifying Kubernetes components...
	I1119 02:42:55.060065  291163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:55.084656  291163 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-987573"
	I1119 02:42:55.084747  291163 host.go:66] Checking if "old-k8s-version-987573" exists ...
	I1119 02:42:55.085405  291163 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:42:55.085634  291163 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:42:55.086927  291163 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:42:55.086947  291163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:42:55.086995  291163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-987573
	I1119 02:42:55.121554  291163 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:42:55.121580  291163 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:42:55.121762  291163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-987573
	I1119 02:42:55.128371  291163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/old-k8s-version-987573/id_rsa Username:docker}
	I1119 02:42:55.160205  291163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/old-k8s-version-987573/id_rsa Username:docker}
	I1119 02:42:55.181208  291163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:42:55.259110  291163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:55.264651  291163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:42:55.282490  291163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:42:55.568676  291163 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
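The pipeline logged at 02:42:55.181208 splices a hosts{} stanza into the CoreDNS Corefile just before its forward directive (it also inserts a log directive before errors, which is omitted here), so that host.minikube.internal resolves from inside the cluster. A minimal Go sketch of the hosts insertion, assuming a simplified Corefile; the function name and sample input are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"strings"
)

// insertHostRecord splices a hosts{} stanza into a Corefile just before the
// forward directive, mirroring the sed command in the log above.
func insertHostRecord(corefile, ip, host string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(insertHostRecord(corefile, "192.168.76.1", "host.minikube.internal"))
}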
	I1119 02:42:55.569719  291163 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-987573" to be "Ready" ...
	I1119 02:42:55.795625  291163 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:42:55.796948  291163 addons.go:515] duration metric: took 743.057906ms for enable addons: enabled=[storage-provisioner default-storageclass]
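Enabling an addon is a manifest copy plus a kubectl apply against the node's embedded kubeconfig, as the Run: lines above show. A rough Go sketch of that flow using plain scp/ssh; the SSH destination, port 33088, and key path are taken from the sshutil lines above, while the staging through /tmp is an assumption (minikube's ssh_runner writes files directly):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

const kubectl = "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
	"/var/lib/minikube/binaries/v1.28.0/kubectl"

const sshKey = "/home/jenkins/minikube-integration/21924-11126/.minikube/machines/old-k8s-version-987573/id_rsa"

// applyAddon copies a manifest into the node and applies it with the node's
// bundled kubectl, mirroring the scp + apply steps in the log.
func applyAddon(manifest string) error {
	cmds := [][]string{
		{"scp", "-P", "33088", "-i", sshKey,
			manifest, "docker@127.0.0.1:/tmp/" + manifest},
		{"ssh", "-p", "33088", "-i", sshKey, "docker@127.0.0.1",
			"sudo mv /tmp/" + manifest + " /etc/kubernetes/addons/ && " +
				kubectl + " apply -f /etc/kubernetes/addons/" + manifest},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := applyAddon("storage-provisioner.yaml"); err != nil {
		log.Fatal(err)
	}
}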
	I1119 02:42:54.248395  302848 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:54.248580  302848 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-811173 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:42:54.313308  302848 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:42:54.706382  302848 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:42:54.983151  302848 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:42:54.983371  302848 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:42:55.301965  302848 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:42:55.490617  302848 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:42:55.599136  302848 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:42:55.872895  302848 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:42:56.305311  302848 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:42:56.308494  302848 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:42:56.312387  302848 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:42:55.174521  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 ...
	I1119 02:42:55.174557  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449: {Name:mk5097a5f345e6abc2d685019cd0e0e0dd64d577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.174776  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449 ...
	I1119 02:42:55.174793  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449: {Name:mkab8fc1530b6e08d3a7078856d1f9ebfde15951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.174905  299668 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt.2f093449 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt
	I1119 02:42:55.174995  299668 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key
	I1119 02:42:55.175062  299668 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key
	I1119 02:42:55.175088  299668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt with IP's: []
	I1119 02:42:55.677842  299668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt ...
	I1119 02:42:55.677879  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt: {Name:mkecc3d139808fcfd56c1c505daef9b4314f266d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.678058  299668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key ...
	I1119 02:42:55.678074  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key: {Name:mkfd946463670be5706400ebe2ff5e4540ed9b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
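The crypto.go steps above boil down to standard-library x509: generate a key, fill in a certificate template, and sign it with the CA key. A self-contained sketch of a CA plus one client cert for the aggregator role; subjects, lifetimes, and key sizes are placeholders rather than minikube's real values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Self-signed CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Client cert for the aggregator, as in the proxy-client.crt step above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "aggregator"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}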
	I1119 02:42:55.678301  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:55.678346  299668 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:55.678360  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:55.678394  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:55.678425  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:55.678472  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:55.678534  299668 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:55.679296  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:55.700801  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:55.720342  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:55.741236  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:55.764042  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:42:55.785834  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1119 02:42:55.807648  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:55.827008  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:42:55.845962  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:55.864695  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:55.881798  299668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:55.898727  299668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:55.910163  299668 ssh_runner.go:195] Run: openssl version
	I1119 02:42:55.915785  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:55.923580  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.926945  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.927022  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:55.969227  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:55.978464  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:55.988370  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:55.992980  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:55.993028  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.051633  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:42:56.065808  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:56.079199  299668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.084981  299668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.085033  299668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.140499  299668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
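The test -L || ln -fs commands above implement the classic OpenSSL trust-store layout: each CA is reachable as /etc/ssl/certs/<subject-hash>.0, where b5213941 is the subject hash of minikubeCA.pem. A sketch of the same install step in Go, shelling out to openssl for the hash; it needs root to write /etc/ssl/certs:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs a cert the way the log does: compute the OpenSSL subject
// hash, then symlink /etc/ssl/certs/<hash>.0 at the PEM so TLS libraries can
// find the CA by issuer. The path is the one from the log.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}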
	I1119 02:42:56.151987  299668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:56.156998  299668 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:42:56.157063  299668 kubeadm.go:401] StartCluster: {Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:56.157164  299668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:56.157224  299668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:56.191409  299668 cri.go:89] found id: ""
	I1119 02:42:56.191487  299668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:56.203572  299668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:56.214503  299668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:56.214560  299668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:56.224485  299668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:56.224520  299668 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:56.224563  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:42:56.234337  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:56.234389  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:56.243718  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:42:56.254141  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:56.254192  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:56.263696  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:42:56.273116  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:56.273160  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:56.281275  299668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:42:56.290803  299668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:56.290848  299668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
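The kubeadm.go:164 entries above run the same check four times: grep each kubeconfig for the expected control-plane URL and rm -f it when the grep fails (status 2 here simply means the file does not exist yet). The loop reduces to a few lines of Go; the paths and endpoint are the ones from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleConfigs removes any kubeconfig under /etc/kubernetes that does not
// point at the expected control-plane endpoint, mirroring the grep + rm -f
// loop in the log.
func cleanStaleConfigs(endpoint string) {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := exec.Command("grep", "-q", endpoint, path).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, path)
			os.Remove(path) // rm -f semantics: ignore missing files
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}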
	I1119 02:42:56.300377  299668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:56.355983  299668 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:56.356057  299668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:56.389799  299668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:56.389890  299668 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:56.389940  299668 kubeadm.go:319] OS: Linux
	I1119 02:42:56.390011  299668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:56.390069  299668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:56.390131  299668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:56.390190  299668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:56.390253  299668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:56.390334  299668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:56.390396  299668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:56.390484  299668 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:56.476300  299668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:56.476471  299668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:56.476678  299668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:56.498223  299668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:42:55.001904  306860 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:42:55.019551  306860 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:42:55.023819  306860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
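The grep -v/echo/cp pipeline above (and its control-plane.minikube.internal twin further down) makes the /etc/hosts update idempotent: drop any stale line for the name, then append the fresh mapping. A Go equivalent of that transform; error handling is minimal and the in-place rewrite skips the /tmp/h.$$ staging step the shell version uses:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// upsertHost rewrites a hosts file so it contains exactly one entry for the
// given name, mirroring the grep -v + echo + cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop the stale mapping, if any
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}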
	I1119 02:42:55.035169  306860 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:42:55.035294  306860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:42:55.035349  306860 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:55.082998  306860 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:55.083033  306860 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:42:55.083093  306860 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:42:55.133091  306860 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:42:55.133117  306860 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:42:55.133127  306860 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1119 02:42:55.133229  306860 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-167150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
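The kubelet [Unit]/[Service] fragment above is a systemd drop-in: the bare ExecStart= first clears the unit's packaged default, then the second ExecStart re-declares the full command line with per-node flags. A sketch that renders and writes such a drop-in (the 10-kubeadm.conf path appears in the scp step below, followed by a daemon-reload); the flag set and values are copied from the log, and the omitted [Install] section is left out for brevity:

package main

import (
	"fmt"
	"log"
	"os"
)

// writeKubeletDropIn renders the override shown in the log: an empty ExecStart=
// clears the default, then the full command line re-declares it.
func writeKubeletDropIn(node, ip, version string) error {
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s
`, version, node, ip)
	return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644)
}

func main() {
	if err := writeKubeletDropIn("default-k8s-diff-port-167150", "192.168.94.2", "v1.34.1"); err != nil {
		log.Fatal(err)
	}
}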
	I1119 02:42:55.133303  306860 ssh_runner.go:195] Run: crio config
	I1119 02:42:55.202350  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:42:55.202422  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:42:55.202527  306860 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:42:55.202583  306860 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-167150 NodeName:default-k8s-diff-port-167150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:42:55.202750  306860 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-167150"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:42:55.202816  306860 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:42:55.212677  306860 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:42:55.212740  306860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:42:55.222763  306860 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 02:42:55.238734  306860 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:42:55.263173  306860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
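The kubeadm.go:190 options struct above is rendered into the multi-document YAML shown at kubeadm.go:196 and then shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. A much-trimmed sketch of that render step with text/template; this template covers only a handful of the real fields and is illustrative, not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

// A trimmed-down stand-in for the template behind "kubeadm config:" above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	err := t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.94.2",
		APIServerPort:     8444,
		KubernetesVersion: "v1.34.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
	if err != nil {
		log.Fatal(err)
	}
}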
	I1119 02:42:55.284386  306860 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:42:55.294186  306860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:42:55.309928  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:42:55.457096  306860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:42:55.486617  306860 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150 for IP: 192.168.94.2
	I1119 02:42:55.486643  306860 certs.go:195] generating shared ca certs ...
	I1119 02:42:55.486664  306860 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:55.486870  306860 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:42:55.486993  306860 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:42:55.487012  306860 certs.go:257] generating profile certs ...
	I1119 02:42:55.487088  306860 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key
	I1119 02:42:55.487102  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt with IP's: []
	I1119 02:42:56.094930  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt ...
	I1119 02:42:56.094965  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.crt: {Name:mk026804441dc7b69d5672d318a7041c3c66d037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.095134  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key ...
	I1119 02:42:56.095149  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key: {Name:mk48f5330ed931b78c15c78cffd61daf6c38116c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.095247  306860 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4
	I1119 02:42:56.095265  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1119 02:42:56.225092  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 ...
	I1119 02:42:56.225159  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4: {Name:mk96b6176b7d10d9bf2189cc1a892c03f023c6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.225342  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4 ...
	I1119 02:42:56.225363  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4: {Name:mk1968f40809874a1e5baaa63347f3037839ec18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.225677  306860 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt.c3ecd8f4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt
	I1119 02:42:56.225860  306860 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4 -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key
	I1119 02:42:56.226000  306860 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key
	I1119 02:42:56.226018  306860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt with IP's: []
	I1119 02:42:56.364736  306860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt ...
	I1119 02:42:56.364766  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt: {Name:mk250838ee0813d8a1018cfdbc728e6a6682cbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.364947  306860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key ...
	I1119 02:42:56.364966  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key: {Name:mkf8d3d5c9e799a5f275d845a37b4700ad82ae66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:42:56.365187  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:42:56.365235  306860 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:42:56.365250  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:42:56.365288  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:42:56.365320  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:42:56.365352  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:42:56.365408  306860 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:42:56.365996  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:42:56.390329  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:42:56.417649  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:42:56.439510  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:42:56.464545  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 02:42:56.495174  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:42:56.522898  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:42:56.545477  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:42:56.569966  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:42:56.596790  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:42:56.618988  306860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:42:56.641382  306860 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:42:56.659625  306860 ssh_runner.go:195] Run: openssl version
	I1119 02:42:56.667985  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:42:56.677102  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.680868  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.680921  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:42:56.728253  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:42:56.738101  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:42:56.748790  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.753545  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.753606  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:42:56.810205  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:42:56.821949  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:42:56.833110  306860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.838128  306860 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.838183  306860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:42:56.891211  306860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:42:56.903114  306860 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:42:56.907959  306860 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:42:56.908012  306860 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:42:56.908102  306860 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:42:56.908149  306860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:42:56.940502  306860 cri.go:89] found id: ""
	I1119 02:42:56.940561  306860 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:42:56.950549  306860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:42:56.960914  306860 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:42:56.960969  306860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:42:56.971164  306860 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:42:56.971180  306860 kubeadm.go:158] found existing configuration files:
	
	I1119 02:42:56.971221  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 02:42:56.981206  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:42:56.981266  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:42:56.990677  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 02:42:57.001004  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:42:57.001054  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:42:57.011142  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 02:42:57.022773  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:42:57.022824  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:42:57.033930  306860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 02:42:57.043496  306860 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:42:57.043549  306860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:42:57.052850  306860 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:42:57.102312  306860 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:42:57.102384  306860 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:42:57.124619  306860 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:42:57.124731  306860 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:42:57.124806  306860 kubeadm.go:319] OS: Linux
	I1119 02:42:57.124877  306860 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:42:57.124940  306860 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:42:57.125010  306860 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:42:57.125075  306860 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:42:57.125121  306860 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:42:57.125176  306860 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:42:57.125246  306860 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:42:57.125304  306860 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:42:57.195789  306860 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:42:57.195928  306860 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:42:57.196075  306860 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:42:57.203186  306860 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:42:56.313962  302848 out.go:252]   - Booting up control plane ...
	I1119 02:42:56.314089  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:42:56.314233  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:42:56.315640  302848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:42:56.334919  302848 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:42:56.335093  302848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:42:56.347888  302848 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:42:56.348202  302848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:42:56.348467  302848 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:42:56.489302  302848 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:42:56.489520  302848 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:42:57.490788  302848 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001707522s
	I1119 02:42:57.494204  302848 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:42:57.494338  302848 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 02:42:57.494504  302848 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:42:57.494636  302848 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
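kubeadm's control-plane-check is a polling loop over four well-known endpoints: the kubelet's plain-HTTP healthz on 10248 and the https livez/healthz ports of the scheduler, controller-manager, and apiserver. A minimal Go version of that loop; InsecureSkipVerify stands in for the proper CA handling kubeadm does, and the 4m0s budget matches the messages above:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls an endpoint until it returns 200, the way the
// control-plane-check polls the livez/healthz URLs above.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://127.0.0.1:10259/livez",   // kube-scheduler
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://192.168.85.2:8443/livez", // kube-apiserver
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}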
	I1119 02:42:56.501424  299668 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:56.501541  299668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:56.501670  299668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:56.649197  299668 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:57.131296  299668 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:57.360417  299668 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:57.537498  299668 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:57.630421  299668 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:57.630669  299668 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-837474] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 02:42:57.690142  299668 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:57.692964  299668 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-837474] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 02:42:58.271962  299668 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:42:58.474942  299668 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:42:58.759980  299668 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:42:58.760242  299668 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:42:59.509507  299668 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:42:56.077921  291163 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-987573" context rescaled to 1 replicas
	W1119 02:42:57.695200  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	W1119 02:43:00.073408  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
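node_ready.go's 6m0s wait is the same retry pattern: poll the node's Ready condition and loop while it reports False, as the two warnings above show. A sketch using kubectl's JSONPath output instead of minikube's in-process client, so the polling logic stands alone:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition with kubectl until it reports
// "True" or the deadline passes, mirroring node_ready.go's retry loop.
func waitNodeReady(node string, timeout time.Duration) error {
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second) // node has "Ready":"False" - will retry
	}
	return fmt.Errorf("node %q not Ready after %s", node, timeout)
}

func main() {
	fmt.Println(waitNodeReady("old-k8s-version-987573", 6*time.Minute))
}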
	I1119 02:42:59.510574  302848 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.016320605s
	I1119 02:43:00.061250  302848 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.566984077s
	I1119 02:43:00.995299  302848 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501086445s
	I1119 02:43:01.005851  302848 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:01.015707  302848 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:01.023229  302848 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:01.023570  302848 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-811173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:01.031334  302848 kubeadm.go:319] [bootstrap-token] Using token: 7mjhrd.yzq9kll5v9huaptf
	I1119 02:43:00.399900  299668 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:43:01.316795  299668 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:43:01.487746  299668 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:43:01.585498  299668 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:43:01.586110  299668 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:43:01.590136  299668 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:42:57.204524  306860 out.go:252]   - Generating certificates and keys ...
	I1119 02:42:57.204623  306860 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:42:57.204687  306860 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:42:57.340602  306860 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:42:57.763784  306860 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:42:58.132475  306860 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:42:58.496067  306860 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:42:59.065287  306860 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:42:59.065574  306860 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-167150 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:42:59.997463  306860 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:42:59.997634  306860 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-167150 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:43:00.551535  306860 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:43:00.590706  306860 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:43:00.670505  306860 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:43:00.670748  306860 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:43:00.836954  306860 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:43:00.975878  306860 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:43:01.234661  306860 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:43:01.776990  306860 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:43:01.935581  306860 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:43:01.936081  306860 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:43:01.939514  306860 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:43:01.942389  306860 out.go:252]   - Booting up control plane ...
	I1119 02:43:01.942532  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:43:01.942649  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:43:01.942759  306860 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:43:01.957695  306860 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:43:01.957851  306860 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:43:01.964809  306860 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:43:01.966421  306860 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:43:01.966510  306860 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:43:02.081897  306860 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:43:02.082048  306860 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
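The [certs] lines above record the SANs baked into each serving certificate (for example, etcd/server is signed for default-k8s-diff-port-167150, localhost, 192.168.94.2, 127.0.0.1 and ::1). A hedged sketch for confirming this on the node, assuming the standard kubeadm pki layout:

	sudo openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'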
	I1119 02:43:01.032638  302848 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:01.032800  302848 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:01.035624  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:01.040457  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:43:01.043182  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:01.045472  302848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:01.048002  302848 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:43:01.401444  302848 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:01.820457  302848 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:02.401502  302848 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:02.402643  302848 kubeadm.go:319] 
	I1119 02:43:02.402737  302848 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:02.402774  302848 kubeadm.go:319] 
	I1119 02:43:02.402905  302848 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:02.402932  302848 kubeadm.go:319] 
	I1119 02:43:02.402964  302848 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:02.403044  302848 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:02.403131  302848 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:02.403146  302848 kubeadm.go:319] 
	I1119 02:43:02.403216  302848 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:02.403225  302848 kubeadm.go:319] 
	I1119 02:43:02.403289  302848 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:02.403297  302848 kubeadm.go:319] 
	I1119 02:43:02.403367  302848 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:02.403490  302848 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:02.403605  302848 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:02.403613  302848 kubeadm.go:319] 
	I1119 02:43:02.403712  302848 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:02.403838  302848 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:02.403855  302848 kubeadm.go:319] 
	I1119 02:43:02.403968  302848 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7mjhrd.yzq9kll5v9huaptf \
	I1119 02:43:02.404116  302848 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:02.404149  302848 kubeadm.go:319] 	--control-plane 
	I1119 02:43:02.404153  302848 kubeadm.go:319] 
	I1119 02:43:02.404265  302848 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:02.404277  302848 kubeadm.go:319] 
	I1119 02:43:02.404388  302848 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7mjhrd.yzq9kll5v9huaptf \
	I1119 02:43:02.404566  302848 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:02.407773  302848 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:02.407946  302848 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
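The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of recomputing it on the control-plane node, following the recipe the upstream kubeadm docs give (pki path assumed to be the standard kubeadm layout):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'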
	I1119 02:43:02.407964  302848 cni.go:84] Creating CNI manager for ""
	I1119 02:43:02.407972  302848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:02.410242  302848 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:43:02.411389  302848 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:02.416029  302848 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:02.416045  302848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:02.434391  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:02.635779  302848 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:02.635869  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:02.635895  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-811173 minikube.k8s.io/updated_at=2025_11_19T02_43_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-811173 minikube.k8s.io/primary=true
	I1119 02:43:02.646141  302848 ops.go:34] apiserver oom_adj: -16
	I1119 02:43:02.701476  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:03.201546  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:03.701526  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
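The repeated `kubectl get sa default` runs above (and continuing below) are minikube's elevateKubeSystemPrivileges wait: it polls until the default ServiceAccount exists, which signals the controller-manager has finished bootstrapping the namespace. A minimal shell equivalent of that loop:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms retry cadence visible in the timestamps
	done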
	I1119 02:43:01.593467  299668 out.go:252]   - Booting up control plane ...
	I1119 02:43:01.593615  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:43:01.593731  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:43:01.593821  299668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:43:01.609953  299668 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:43:01.610136  299668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:43:01.617306  299668 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:43:01.617705  299668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:43:01.617773  299668 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:43:01.745744  299668 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:43:01.745917  299668 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:43:02.749850  299668 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00218898s
	I1119 02:43:02.753994  299668 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:43:02.754137  299668 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 02:43:02.754320  299668 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:43:02.754458  299668 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:43:04.243363  299668 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.489187962s
	I1119 02:43:05.042174  299668 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.288115678s
	W1119 02:43:02.073659  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	W1119 02:43:04.075000  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	I1119 02:43:06.755785  299668 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00176291s
	I1119 02:43:06.768184  299668 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:06.778618  299668 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:06.786476  299668 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:06.786680  299668 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-837474 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:06.793671  299668 kubeadm.go:319] [bootstrap-token] Using token: 9fycjj.9ujoqc3x92l2ibft
	I1119 02:43:02.583638  306860 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.872402ms
	I1119 02:43:02.588260  306860 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:43:02.588375  306860 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1119 02:43:02.588528  306860 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:43:02.588631  306860 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:43:04.140696  306860 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.552291183s
	I1119 02:43:05.150994  306860 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.562670686s
	I1119 02:43:07.089548  306860 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501158948s
	I1119 02:43:07.101719  306860 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:43:07.110570  306860 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:43:07.118309  306860 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:43:07.118633  306860 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-167150 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:43:07.128002  306860 kubeadm.go:319] [bootstrap-token] Using token: waagng.bgqyeddkg8xbkifv
	I1119 02:43:07.129465  306860 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:07.129641  306860 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:07.132357  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:07.138676  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:43:07.142447  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:07.143596  306860 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:07.145985  306860 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
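Bootstrap tokens like waagng.bgqyeddkg8xbkifv above follow the fixed [a-z0-9]{6}.[a-z0-9]{16} format (a public token ID plus a secret), and the RBAC rules just configured are what allow a token-bearing node to submit CSRs. A hedged sketch for managing such tokens after init, using standard kubeadm subcommands:

	sudo kubeadm token list                          # show active bootstrap tokens
	sudo kubeadm token create --print-join-command   # mint a fresh token plus matching join command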
	I1119 02:43:04.202036  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:04.702118  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:05.201577  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:05.702195  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:06.202066  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:06.701602  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:07.202550  302848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:07.292809  302848 kubeadm.go:1114] duration metric: took 4.657004001s to wait for elevateKubeSystemPrivileges
	I1119 02:43:07.292851  302848 kubeadm.go:403] duration metric: took 15.846648283s to StartCluster
	I1119 02:43:07.292874  302848 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:07.292952  302848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:07.294786  302848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:07.295068  302848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:07.295192  302848 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:07.295259  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:07.295275  302848 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-811173"
	I1119 02:43:07.295295  302848 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-811173"
	I1119 02:43:07.295325  302848 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:07.295866  302848 addons.go:70] Setting default-storageclass=true in profile "embed-certs-811173"
	I1119 02:43:07.295887  302848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-811173"
	I1119 02:43:07.295930  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.296292  302848 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:07.296344  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.297705  302848 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:07.299117  302848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:07.331934  302848 addons.go:239] Setting addon default-storageclass=true in "embed-certs-811173"
	I1119 02:43:07.331974  302848 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:07.332295  302848 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:07.332844  302848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:07.334167  302848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:07.334188  302848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:07.334241  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:07.362524  302848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:07.362762  302848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:07.362850  302848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:07.364663  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:07.388411  302848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:07.411165  302848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:07.483920  302848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:07.503288  302848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:07.513295  302848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:07.651779  302848 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 02:43:07.654104  302848 node_ready.go:35] waiting up to 6m0s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:07.881305  302848 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
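The long sed pipeline a few lines above rewrites the coredns ConfigMap in place; the fragment it injects (reconstructed from that command) makes host.minikube.internal resolve to the host gateway IP from inside pods:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}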
	I1119 02:43:06.795001  299668 out.go:252]   - Configuring RBAC rules ...
	I1119 02:43:06.795151  299668 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:43:06.797762  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:43:06.802768  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:43:06.805038  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:43:06.807078  299668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:43:06.809131  299668 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:43:07.162003  299668 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:07.591067  299668 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:08.161713  299668 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:08.162667  299668 kubeadm.go:319] 
	I1119 02:43:08.162773  299668 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:08.162792  299668 kubeadm.go:319] 
	I1119 02:43:08.162919  299668 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:08.162929  299668 kubeadm.go:319] 
	I1119 02:43:08.162968  299668 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:08.163054  299668 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:08.163127  299668 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:08.163135  299668 kubeadm.go:319] 
	I1119 02:43:08.163218  299668 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:08.163232  299668 kubeadm.go:319] 
	I1119 02:43:08.163270  299668 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:08.163276  299668 kubeadm.go:319] 
	I1119 02:43:08.163318  299668 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:08.163382  299668 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:08.163483  299668 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:08.163500  299668 kubeadm.go:319] 
	I1119 02:43:08.163615  299668 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:08.163733  299668 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:08.163746  299668 kubeadm.go:319] 
	I1119 02:43:08.163885  299668 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9fycjj.9ujoqc3x92l2ibft \
	I1119 02:43:08.164006  299668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:08.164041  299668 kubeadm.go:319] 	--control-plane 
	I1119 02:43:08.164050  299668 kubeadm.go:319] 
	I1119 02:43:08.164194  299668 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:08.164206  299668 kubeadm.go:319] 
	I1119 02:43:08.164311  299668 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9fycjj.9ujoqc3x92l2ibft \
	I1119 02:43:08.164401  299668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:08.166559  299668 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:08.166685  299668 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:43:08.166716  299668 cni.go:84] Creating CNI manager for ""
	I1119 02:43:08.166726  299668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:08.169105  299668 out.go:179] * Configuring CNI (Container Networking Interface) ...
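kindnet is recommended above because the docker driver with the crio runtime ships no pod network of its own; kindnet is a small CNI that routes pod traffic over host routes. As an assumed check (minikube deploys it as a DaemonSet named kindnet in kube-system):

	kubectl -n kube-system get ds kindnet -o wide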
	I1119 02:43:07.495981  306860 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:43:07.914284  306860 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:43:08.495511  306860 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:43:08.496414  306860 kubeadm.go:319] 
	I1119 02:43:08.496519  306860 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:43:08.496532  306860 kubeadm.go:319] 
	I1119 02:43:08.496630  306860 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:43:08.496640  306860 kubeadm.go:319] 
	I1119 02:43:08.496692  306860 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:43:08.496819  306860 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:43:08.496900  306860 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:43:08.496910  306860 kubeadm.go:319] 
	I1119 02:43:08.497001  306860 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:43:08.497011  306860 kubeadm.go:319] 
	I1119 02:43:08.497081  306860 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:43:08.497091  306860 kubeadm.go:319] 
	I1119 02:43:08.497172  306860 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:43:08.497303  306860 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:43:08.497404  306860 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:43:08.497414  306860 kubeadm.go:319] 
	I1119 02:43:08.497561  306860 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:43:08.497664  306860 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:43:08.497674  306860 kubeadm.go:319] 
	I1119 02:43:08.497789  306860 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token waagng.bgqyeddkg8xbkifv \
	I1119 02:43:08.497949  306860 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:43:08.497979  306860 kubeadm.go:319] 	--control-plane 
	I1119 02:43:08.497987  306860 kubeadm.go:319] 
	I1119 02:43:08.498113  306860 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:43:08.498121  306860 kubeadm.go:319] 
	I1119 02:43:08.498211  306860 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token waagng.bgqyeddkg8xbkifv \
	I1119 02:43:08.498313  306860 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:43:08.500938  306860 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:43:08.501038  306860 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:43:08.501062  306860 cni.go:84] Creating CNI manager for ""
	I1119 02:43:08.501071  306860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:08.502415  306860 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:43:07.882405  302848 addons.go:515] duration metric: took 587.224612ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:08.155743  302848 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-811173" context rescaled to 1 replicas
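Rescaling coredns to 1 replica, as logged above, trims kubeadm's default two-replica deployment down for a single-node cluster. The equivalent manual step, as a sketch:

	kubectl -n kube-system scale deployment coredns --replicas=1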
	I1119 02:43:08.170011  299668 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:08.174308  299668 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:08.174323  299668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:08.187641  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:08.394639  299668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:08.394749  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.394806  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-837474 minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-837474 minikube.k8s.io/primary=true
	I1119 02:43:08.404680  299668 ops.go:34] apiserver oom_adj: -16
	I1119 02:43:08.461588  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.962254  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.461759  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.961662  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 02:43:06.573722  291163 node_ready.go:57] node "old-k8s-version-987573" has "Ready":"False" status (will retry)
	I1119 02:43:08.072741  291163 node_ready.go:49] node "old-k8s-version-987573" is "Ready"
	I1119 02:43:08.072770  291163 node_ready.go:38] duration metric: took 12.502973194s for node "old-k8s-version-987573" to be "Ready" ...
	I1119 02:43:08.072782  291163 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:08.072824  291163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:08.085646  291163 api_server.go:72] duration metric: took 13.03179653s to wait for apiserver process to appear ...
	I1119 02:43:08.085675  291163 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:08.085696  291163 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:43:08.090892  291163 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:43:08.091918  291163 api_server.go:141] control plane version: v1.28.0
	I1119 02:43:08.091942  291163 api_server.go:131] duration metric: took 6.259879ms to wait for apiserver health ...
	I1119 02:43:08.091952  291163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:08.095373  291163 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:08.095414  291163 system_pods.go:61] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.095426  291163 system_pods.go:61] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.095449  291163 system_pods.go:61] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.095455  291163 system_pods.go:61] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.095461  291163 system_pods.go:61] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.095466  291163 system_pods.go:61] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.095471  291163 system_pods.go:61] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.095478  291163 system_pods.go:61] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.095487  291163 system_pods.go:74] duration metric: took 3.527954ms to wait for pod list to return data ...
	I1119 02:43:08.095497  291163 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:08.097407  291163 default_sa.go:45] found service account: "default"
	I1119 02:43:08.097424  291163 default_sa.go:55] duration metric: took 1.918195ms for default service account to be created ...
	I1119 02:43:08.097462  291163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:08.100635  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.100659  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.100665  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.100671  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.100675  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.100681  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.100686  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.100696  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.100704  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.100731  291163 retry.go:31] will retry after 255.615466ms: missing components: kube-dns
	I1119 02:43:08.360951  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.360990  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.360999  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.361007  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.361012  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.361017  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.361022  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.361027  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.361034  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.361058  291163 retry.go:31] will retry after 283.051609ms: missing components: kube-dns
	I1119 02:43:08.649105  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:08.649146  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:08.649155  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:08.649163  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:08.649177  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:08.649183  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:08.649189  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:08.649194  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:08.649201  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:08.649222  291163 retry.go:31] will retry after 437.362391ms: missing components: kube-dns
	I1119 02:43:09.091273  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:09.091310  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:09.091322  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:09.091328  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:09.091332  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:09.091336  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:09.091339  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:09.091342  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:09.091347  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:09.091360  291163 retry.go:31] will retry after 557.694848ms: missing components: kube-dns
	I1119 02:43:09.654831  291163 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:09.654864  291163 system_pods.go:89] "coredns-5dd5756b68-djd8r" [38b8c793-304e-42c1-b2a0-ecd1032a5962] Running
	I1119 02:43:09.654874  291163 system_pods.go:89] "etcd-old-k8s-version-987573" [f7ae9d5c-07c4-402e-97ea-a50375ebc9c3] Running
	I1119 02:43:09.654880  291163 system_pods.go:89] "kindnet-57t4v" [0db2f280-bd80-4848-b27d-5419aa484d18] Running
	I1119 02:43:09.654887  291163 system_pods.go:89] "kube-apiserver-old-k8s-version-987573" [8da95707-6a07-4f17-a00c-63ec9cbe294d] Running
	I1119 02:43:09.654892  291163 system_pods.go:89] "kube-controller-manager-old-k8s-version-987573" [ea4139c7-6ced-429f-9eb9-9b2897fd679e] Running
	I1119 02:43:09.654897  291163 system_pods.go:89] "kube-proxy-tmqhk" [ef6bd301-05f1-4196-99a7-73e8ff59dc4b] Running
	I1119 02:43:09.654902  291163 system_pods.go:89] "kube-scheduler-old-k8s-version-987573" [85535e86-1e2a-4131-baec-893b97ce32e6] Running
	I1119 02:43:09.654907  291163 system_pods.go:89] "storage-provisioner" [abe94ba2-07c5-4f03-ab28-00ea277fdc56] Running
	I1119 02:43:09.654917  291163 system_pods.go:126] duration metric: took 1.55744718s to wait for k8s-apps to be running ...
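The "missing components: kube-dns" retries above resolve once the coredns pod leaves Pending; the poller counts a pod only when all of its containers report Ready. A hedged equivalent check, using the k8s-app=kube-dns label that coredns pods carry:

	kubectl -n kube-system get pods -l k8s-app=kube-dns \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'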
	I1119 02:43:09.654931  291163 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:09.654989  291163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:09.668526  291163 system_svc.go:56] duration metric: took 13.587992ms WaitForService to wait for kubelet
	I1119 02:43:09.668557  291163 kubeadm.go:587] duration metric: took 14.614710886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:09.668577  291163 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:09.671058  291163 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:09.671080  291163 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:09.671094  291163 node_conditions.go:105] duration metric: took 2.511044ms to run NodePressure ...
	I1119 02:43:09.671108  291163 start.go:242] waiting for startup goroutines ...
	I1119 02:43:09.671122  291163 start.go:247] waiting for cluster config update ...
	I1119 02:43:09.671138  291163 start.go:256] writing updated cluster config ...
	I1119 02:43:09.671426  291163 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:09.675339  291163 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:09.679685  291163 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-djd8r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.683447  291163 pod_ready.go:94] pod "coredns-5dd5756b68-djd8r" is "Ready"
	I1119 02:43:09.683468  291163 pod_ready.go:86] duration metric: took 3.760218ms for pod "coredns-5dd5756b68-djd8r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.686154  291163 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.690031  291163 pod_ready.go:94] pod "etcd-old-k8s-version-987573" is "Ready"
	I1119 02:43:09.690049  291163 pod_ready.go:86] duration metric: took 3.878026ms for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.692504  291163 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.695894  291163 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-987573" is "Ready"
	I1119 02:43:09.695913  291163 pod_ready.go:86] duration metric: took 3.39096ms for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:09.700042  291163 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.080305  291163 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-987573" is "Ready"
	I1119 02:43:10.080330  291163 pod_ready.go:86] duration metric: took 380.2693ms for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.279834  291163 pod_ready.go:83] waiting for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.679358  291163 pod_ready.go:94] pod "kube-proxy-tmqhk" is "Ready"
	I1119 02:43:10.679390  291163 pod_ready.go:86] duration metric: took 399.530656ms for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:10.880413  291163 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:11.279416  291163 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-987573" is "Ready"
	I1119 02:43:11.279469  291163 pod_ready.go:86] duration metric: took 399.023354ms for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:11.279484  291163 pod_ready.go:40] duration metric: took 1.604115977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:11.320952  291163 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:43:11.322818  291163 out.go:203] 
	W1119 02:43:11.324015  291163 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:43:11.325253  291163 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:43:11.326753  291163 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-987573" cluster and "default" namespace by default
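The warning above flags a client/server minor-version skew of 6 (kubectl 1.34.2 against Kubernetes 1.28.0), well outside kubectl's supported ±1 minor skew. A sketch of the two ways to inspect and work around it, per the hint in the log:

	kubectl version --output=json     # compare clientVersion vs serverVersion
	minikube kubectl -- get pods -A   # runs a kubectl matched to the cluster version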
	I1119 02:43:08.503687  306860 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:43:08.508285  306860 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:43:08.508302  306860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:43:08.523707  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:43:08.769348  306860 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:43:08.769426  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:08.769484  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-167150 minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=default-k8s-diff-port-167150 minikube.k8s.io/primary=true
	I1119 02:43:08.779644  306860 ops.go:34] apiserver oom_adj: -16
	I1119 02:43:08.864308  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.364395  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:09.865330  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.364616  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.864703  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.364553  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.864420  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.365307  306860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.440810  306860 kubeadm.go:1114] duration metric: took 3.671440647s to wait for elevateKubeSystemPrivileges
	I1119 02:43:12.440859  306860 kubeadm.go:403] duration metric: took 15.532850823s to StartCluster
	I1119 02:43:12.440882  306860 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:12.440962  306860 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:12.443128  306860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:12.443390  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:12.443402  306860 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:12.443617  306860 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:12.443467  306860 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:12.443670  306860 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-167150"
	I1119 02:43:12.443679  306860 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-167150"
	I1119 02:43:12.443697  306860 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-167150"
	I1119 02:43:12.443697  306860 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167150"
	I1119 02:43:12.443736  306860 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:43:12.444076  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.444253  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.446396  306860 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:12.447600  306860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:12.470366  306860 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:12.471033  306860 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-167150"
	I1119 02:43:12.471078  306860 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:43:12.471574  306860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:12.472766  306860 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:12.472818  306860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:12.472877  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:12.503314  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:12.503591  306860 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:12.503615  306860 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:12.503672  306860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:12.534100  306860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:12.556628  306860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:12.606106  306860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:12.623922  306860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:12.650781  306860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:12.727240  306860 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 02:43:12.728708  306860 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:43:12.921283  306860 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:43:10.461847  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:10.962221  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.462998  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:11.962639  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.462654  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:12.962592  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:13.462281  299668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:43:13.526012  299668 kubeadm.go:1114] duration metric: took 5.131316482s to wait for elevateKubeSystemPrivileges
	I1119 02:43:13.526050  299668 kubeadm.go:403] duration metric: took 17.368991046s to StartCluster
	I1119 02:43:13.526070  299668 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:13.526144  299668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:13.528869  299668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:13.529152  299668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:43:13.529178  299668 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:13.529221  299668 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:13.529318  299668 addons.go:70] Setting storage-provisioner=true in profile "no-preload-837474"
	I1119 02:43:13.529340  299668 addons.go:239] Setting addon storage-provisioner=true in "no-preload-837474"
	I1119 02:43:13.529340  299668 addons.go:70] Setting default-storageclass=true in profile "no-preload-837474"
	I1119 02:43:13.529365  299668 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:13.529370  299668 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:43:13.529375  299668 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-837474"
	I1119 02:43:13.529859  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.530016  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.530719  299668 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:13.531956  299668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:13.553148  299668 addons.go:239] Setting addon default-storageclass=true in "no-preload-837474"
	I1119 02:43:13.553192  299668 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:43:13.553734  299668 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:13.555218  299668 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:13.556409  299668 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:13.556465  299668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:13.556515  299668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:13.581067  299668 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:13.581088  299668 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:13.581147  299668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:13.587309  299668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:43:13.603773  299668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:43:13.616042  299668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:43:13.662733  299668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:13.696898  299668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:13.712155  299668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:13.803707  299668 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 02:43:13.805528  299668 node_ready.go:35] waiting up to 6m0s for node "no-preload-837474" to be "Ready" ...
	I1119 02:43:14.021090  299668 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1119 02:43:09.657354  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	W1119 02:43:12.157245  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	I1119 02:43:14.022184  299668 addons.go:515] duration metric: took 492.963117ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:14.308619  299668 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-837474" context rescaled to 1 replicas
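The kapi.go:214 line above rescales the coredns deployment to a single replica. Assuming a client-go clientset, the scale-subresource update looks roughly like this sketch (function and parameter names are illustrative):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS sets kube-system/coredns to one replica via the scale
// subresource, the same effect the kapi.go:214 line reports.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
	deploys := cs.AppsV1().Deployments("kube-system")
	scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}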
	I1119 02:43:12.922563  306860 addons.go:515] duration metric: took 479.097332ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:43:13.231221  306860 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-167150" context rescaled to 1 replicas
	W1119 02:43:14.732655  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	W1119 02:43:14.157530  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	W1119 02:43:16.157612  302848 node_ready.go:57] node "embed-certs-811173" has "Ready":"False" status (will retry)
	I1119 02:43:18.657467  302848 node_ready.go:49] node "embed-certs-811173" is "Ready"
	I1119 02:43:18.657570  302848 node_ready.go:38] duration metric: took 11.003423276s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:18.657596  302848 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:18.657639  302848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:18.670551  302848 api_server.go:72] duration metric: took 11.375418064s to wait for apiserver process to appear ...
	I1119 02:43:18.670593  302848 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:18.670611  302848 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:18.675195  302848 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:43:18.676254  302848 api_server.go:141] control plane version: v1.34.1
	I1119 02:43:18.676282  302848 api_server.go:131] duration metric: took 5.680617ms to wait for apiserver health ...
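The api_server.go lines above poll the apiserver's /healthz endpoint until it answers 200. A standalone sketch of that probe, with the URL copied from the log; TLS verification is skipped only to keep the sketch short, where a real probe should trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above.
	url := "https://192.168.85.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; pin the cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never reported healthy")
}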
	I1119 02:43:18.676292  302848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:18.679796  302848 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:18.679829  302848 system_pods.go:61] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:18.679837  302848 system_pods.go:61] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.679843  302848 system_pods.go:61] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.679849  302848 system_pods.go:61] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.679860  302848 system_pods.go:61] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.679865  302848 system_pods.go:61] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.679873  302848 system_pods.go:61] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.679881  302848 system_pods.go:61] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:18.679892  302848 system_pods.go:74] duration metric: took 3.592078ms to wait for pod list to return data ...
	I1119 02:43:18.679903  302848 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:18.682287  302848 default_sa.go:45] found service account: "default"
	I1119 02:43:18.682313  302848 default_sa.go:55] duration metric: took 2.403388ms for default service account to be created ...
	I1119 02:43:18.682323  302848 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:18.684915  302848 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:18.684945  302848 system_pods.go:89] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:18.684954  302848 system_pods.go:89] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.684965  302848 system_pods.go:89] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.684971  302848 system_pods.go:89] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.684980  302848 system_pods.go:89] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.684986  302848 system_pods.go:89] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.684993  302848 system_pods.go:89] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.685000  302848 system_pods.go:89] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:18.685025  302848 retry.go:31] will retry after 210.702103ms: missing components: kube-dns
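The retry.go line above re-polls the kube-system pod list after a short randomized wait until no component is missing. The pattern, sketched with a hypothetical missingComponents callback standing in for the real pod lookup:

package sketch

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents re-polls until missingComponents (a hypothetical callback,
// e.g. "which kube-system apps are not yet Running") returns an empty slice.
func waitForComponents(timeout time.Duration, missingComponents func() []string) error {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for {
		missing := missingComponents()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, still missing: %v", missing)
		}
		// Randomized waits like the 210.702103ms above spread out repeated polls.
		time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
	}
}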
	I1119 02:43:18.900340  302848 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:18.900379  302848 system_pods.go:89] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Running
	I1119 02:43:18.900388  302848 system_pods.go:89] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running
	I1119 02:43:18.900394  302848 system_pods.go:89] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running
	I1119 02:43:18.900400  302848 system_pods.go:89] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running
	I1119 02:43:18.900410  302848 system_pods.go:89] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running
	I1119 02:43:18.900415  302848 system_pods.go:89] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running
	I1119 02:43:18.900424  302848 system_pods.go:89] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running
	I1119 02:43:18.900441  302848 system_pods.go:89] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Running
	I1119 02:43:18.900455  302848 system_pods.go:126] duration metric: took 218.125466ms to wait for k8s-apps to be running ...
	I1119 02:43:18.900467  302848 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:18.900516  302848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:18.914258  302848 system_svc.go:56] duration metric: took 13.781732ms WaitForService to wait for kubelet
	I1119 02:43:18.914285  302848 kubeadm.go:587] duration metric: took 11.619154777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:18.914308  302848 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:18.917624  302848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:18.917653  302848 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:18.917668  302848 node_conditions.go:105] duration metric: took 3.351447ms to run NodePressure ...
	I1119 02:43:18.917682  302848 start.go:242] waiting for startup goroutines ...
	I1119 02:43:18.917691  302848 start.go:247] waiting for cluster config update ...
	I1119 02:43:18.917704  302848 start.go:256] writing updated cluster config ...
	I1119 02:43:18.918010  302848 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:18.922579  302848 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:18.927046  302848 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.932042  302848 pod_ready.go:94] pod "coredns-66bc5c9577-6zqr2" is "Ready"
	I1119 02:43:18.932062  302848 pod_ready.go:86] duration metric: took 4.995305ms for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.934146  302848 pod_ready.go:83] waiting for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.938004  302848 pod_ready.go:94] pod "etcd-embed-certs-811173" is "Ready"
	I1119 02:43:18.938027  302848 pod_ready.go:86] duration metric: took 3.859982ms for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.939959  302848 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.943426  302848 pod_ready.go:94] pod "kube-apiserver-embed-certs-811173" is "Ready"
	I1119 02:43:18.943477  302848 pod_ready.go:86] duration metric: took 3.498122ms for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:18.945295  302848 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:43:15.809292  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	W1119 02:43:18.308758  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	I1119 02:43:19.327493  302848 pod_ready.go:94] pod "kube-controller-manager-embed-certs-811173" is "Ready"
	I1119 02:43:19.327522  302848 pod_ready.go:86] duration metric: took 382.207661ms for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:19.527541  302848 pod_ready.go:83] waiting for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:19.926760  302848 pod_ready.go:94] pod "kube-proxy-s5bzz" is "Ready"
	I1119 02:43:19.926788  302848 pod_ready.go:86] duration metric: took 399.218426ms for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:20.127073  302848 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:20.527220  302848 pod_ready.go:94] pod "kube-scheduler-embed-certs-811173" is "Ready"
	I1119 02:43:20.527245  302848 pod_ready.go:86] duration metric: took 400.150902ms for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:20.527257  302848 pod_ready.go:40] duration metric: took 1.604655373s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:20.574829  302848 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:43:20.576692  302848 out.go:179] * Done! kubectl is now configured to use "embed-certs-811173" cluster and "default" namespace by default
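The pod_ready.go waits in the run above reduce to checking each control-plane pod's Ready condition. A sketch of that predicate, assuming the k8s.io/api core types:

package sketch

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True, which is
// what the pod_ready.go lines above wait for on each control-plane pod.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}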
	W1119 02:43:17.232129  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	W1119 02:43:19.732303  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	W1119 02:43:21.732557  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	W1119 02:43:20.309649  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	W1119 02:43:22.808418  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	W1119 02:43:24.809115  299668 node_ready.go:57] node "no-preload-837474" has "Ready":"False" status (will retry)
	W1119 02:43:24.231545  306860 node_ready.go:57] node "default-k8s-diff-port-167150" has "Ready":"False" status (will retry)
	I1119 02:43:24.733458  306860 node_ready.go:49] node "default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:24.733492  306860 node_ready.go:38] duration metric: took 12.004757465s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:43:24.733508  306860 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:24.733583  306860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:24.752894  306860 api_server.go:72] duration metric: took 12.309451634s to wait for apiserver process to appear ...
	I1119 02:43:24.752923  306860 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:24.752947  306860 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1119 02:43:24.757341  306860 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1119 02:43:24.758286  306860 api_server.go:141] control plane version: v1.34.1
	I1119 02:43:24.758343  306860 api_server.go:131] duration metric: took 5.412493ms to wait for apiserver health ...
	I1119 02:43:24.758360  306860 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:24.764264  306860 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:24.764302  306860 system_pods.go:61] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:24.764312  306860 system_pods.go:61] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running
	I1119 02:43:24.764317  306860 system_pods.go:61] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:43:24.764321  306860 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running
	I1119 02:43:24.764324  306860 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running
	I1119 02:43:24.764328  306860 system_pods.go:61] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:43:24.764331  306860 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running
	I1119 02:43:24.764335  306860 system_pods.go:61] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:24.764341  306860 system_pods.go:74] duration metric: took 5.975017ms to wait for pod list to return data ...
	I1119 02:43:24.764348  306860 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:24.766502  306860 default_sa.go:45] found service account: "default"
	I1119 02:43:24.766524  306860 default_sa.go:55] duration metric: took 2.165771ms for default service account to be created ...
	I1119 02:43:24.766533  306860 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:24.865373  306860 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:24.865426  306860 system_pods.go:89] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:24.865447  306860 system_pods.go:89] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running
	I1119 02:43:24.865457  306860 system_pods.go:89] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:43:24.865479  306860 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running
	I1119 02:43:24.865489  306860 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running
	I1119 02:43:24.865495  306860 system_pods.go:89] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:43:24.865505  306860 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running
	I1119 02:43:24.865519  306860 system_pods.go:89] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:24.865542  306860 retry.go:31] will retry after 194.79473ms: missing components: kube-dns
	I1119 02:43:25.064190  306860 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:25.064221  306860 system_pods.go:89] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Running
	I1119 02:43:25.064227  306860 system_pods.go:89] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running
	I1119 02:43:25.064232  306860 system_pods.go:89] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:43:25.064235  306860 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running
	I1119 02:43:25.064239  306860 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running
	I1119 02:43:25.064242  306860 system_pods.go:89] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:43:25.064246  306860 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running
	I1119 02:43:25.064250  306860 system_pods.go:89] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Running
	I1119 02:43:25.064257  306860 system_pods.go:126] duration metric: took 297.719432ms to wait for k8s-apps to be running ...
	I1119 02:43:25.064266  306860 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:25.064303  306860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:25.076907  306860 system_svc.go:56] duration metric: took 12.630218ms WaitForService to wait for kubelet
	I1119 02:43:25.076935  306860 kubeadm.go:587] duration metric: took 12.633502759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:25.076960  306860 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:25.079481  306860 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:25.079502  306860 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:25.079515  306860 node_conditions.go:105] duration metric: took 2.549912ms to run NodePressure ...
	I1119 02:43:25.079525  306860 start.go:242] waiting for startup goroutines ...
	I1119 02:43:25.079531  306860 start.go:247] waiting for cluster config update ...
	I1119 02:43:25.079541  306860 start.go:256] writing updated cluster config ...
	I1119 02:43:25.079785  306860 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:25.083850  306860 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:25.087017  306860 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.090686  306860 pod_ready.go:94] pod "coredns-66bc5c9577-bht2q" is "Ready"
	I1119 02:43:25.090707  306860 pod_ready.go:86] duration metric: took 3.667578ms for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.092373  306860 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.095652  306860 pod_ready.go:94] pod "etcd-default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:25.095668  306860 pod_ready.go:86] duration metric: took 3.276898ms for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.097376  306860 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.100915  306860 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:25.100937  306860 pod_ready.go:86] duration metric: took 3.543197ms for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.102634  306860 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.487998  306860 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:25.488025  306860 pod_ready.go:86] duration metric: took 385.369921ms for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:25.687758  306860 pod_ready.go:83] waiting for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:26.088008  306860 pod_ready.go:94] pod "kube-proxy-8gl4n" is "Ready"
	I1119 02:43:26.088034  306860 pod_ready.go:86] duration metric: took 400.250445ms for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:26.288481  306860 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:26.688376  306860 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-167150" is "Ready"
	I1119 02:43:26.688410  306860 pod_ready.go:86] duration metric: took 399.899992ms for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:26.688424  306860 pod_ready.go:40] duration metric: took 1.604546321s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:26.732044  306860 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:43:26.733844  306860 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-167150" cluster and "default" namespace by default
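The start.go:628 lines in this section report the kubectl/cluster minor-version skew; kubectl is supported within one minor version of the apiserver, so a skew of 0 is comfortably in range. A toy calculation of that figure:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference of the minor components of two
// "major.minor.patch" versions, as in "kubectl: 1.34.2, cluster: 1.34.1".
func minorSkew(client, server string) int {
	cm, _ := strconv.Atoi(strings.Split(client, ".")[1])
	sm, _ := strconv.Atoi(strings.Split(server, ".")[1])
	if cm > sm {
		return cm - sm
	}
	return sm - cm
}

func main() {
	fmt.Println(minorSkew("1.34.2", "1.34.1")) // 0
}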
	I1119 02:43:26.808929  299668 node_ready.go:49] node "no-preload-837474" is "Ready"
	I1119 02:43:26.808957  299668 node_ready.go:38] duration metric: took 13.003405438s for node "no-preload-837474" to be "Ready" ...
	I1119 02:43:26.808970  299668 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:26.809032  299668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:26.822099  299668 api_server.go:72] duration metric: took 13.292881287s to wait for apiserver process to appear ...
	I1119 02:43:26.822122  299668 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:26.822137  299668 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:43:26.826385  299668 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 02:43:26.827293  299668 api_server.go:141] control plane version: v1.34.1
	I1119 02:43:26.827320  299668 api_server.go:131] duration metric: took 5.191845ms to wait for apiserver health ...
	I1119 02:43:26.827330  299668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:26.831205  299668 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:26.831251  299668 system_pods.go:61] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:26.831265  299668 system_pods.go:61] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running
	I1119 02:43:26.831278  299668 system_pods.go:61] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running
	I1119 02:43:26.831288  299668 system_pods.go:61] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running
	I1119 02:43:26.831294  299668 system_pods.go:61] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running
	I1119 02:43:26.831302  299668 system_pods.go:61] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running
	I1119 02:43:26.831306  299668 system_pods.go:61] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running
	I1119 02:43:26.831311  299668 system_pods.go:61] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:26.831322  299668 system_pods.go:74] duration metric: took 3.984632ms to wait for pod list to return data ...
	I1119 02:43:26.831335  299668 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:26.833608  299668 default_sa.go:45] found service account: "default"
	I1119 02:43:26.833631  299668 default_sa.go:55] duration metric: took 2.288778ms for default service account to be created ...
	I1119 02:43:26.833641  299668 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:26.836207  299668 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:26.836235  299668 system_pods.go:89] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:26.836242  299668 system_pods.go:89] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running
	I1119 02:43:26.836250  299668 system_pods.go:89] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running
	I1119 02:43:26.836256  299668 system_pods.go:89] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running
	I1119 02:43:26.836261  299668 system_pods.go:89] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running
	I1119 02:43:26.836265  299668 system_pods.go:89] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running
	I1119 02:43:26.836270  299668 system_pods.go:89] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running
	I1119 02:43:26.836282  299668 system_pods.go:89] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:26.836305  299668 retry.go:31] will retry after 310.458015ms: missing components: kube-dns
	I1119 02:43:27.151217  299668 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:27.151249  299668 system_pods.go:89] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:27.151255  299668 system_pods.go:89] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running
	I1119 02:43:27.151261  299668 system_pods.go:89] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running
	I1119 02:43:27.151267  299668 system_pods.go:89] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running
	I1119 02:43:27.151271  299668 system_pods.go:89] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running
	I1119 02:43:27.151274  299668 system_pods.go:89] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running
	I1119 02:43:27.151277  299668 system_pods.go:89] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running
	I1119 02:43:27.151281  299668 system_pods.go:89] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:27.151303  299668 retry.go:31] will retry after 292.653885ms: missing components: kube-dns
	I1119 02:43:27.447718  299668 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:27.447757  299668 system_pods.go:89] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:27.447765  299668 system_pods.go:89] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running
	I1119 02:43:27.447774  299668 system_pods.go:89] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running
	I1119 02:43:27.447780  299668 system_pods.go:89] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running
	I1119 02:43:27.447785  299668 system_pods.go:89] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running
	I1119 02:43:27.447790  299668 system_pods.go:89] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running
	I1119 02:43:27.447795  299668 system_pods.go:89] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running
	I1119 02:43:27.447802  299668 system_pods.go:89] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:27.447823  299668 retry.go:31] will retry after 477.100014ms: missing components: kube-dns
	I1119 02:43:27.929074  299668 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:27.929107  299668 system_pods.go:89] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Running
	I1119 02:43:27.929116  299668 system_pods.go:89] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running
	I1119 02:43:27.929120  299668 system_pods.go:89] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running
	I1119 02:43:27.929125  299668 system_pods.go:89] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running
	I1119 02:43:27.929132  299668 system_pods.go:89] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running
	I1119 02:43:27.929145  299668 system_pods.go:89] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running
	I1119 02:43:27.929151  299668 system_pods.go:89] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running
	I1119 02:43:27.929157  299668 system_pods.go:89] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Running
	I1119 02:43:27.929167  299668 system_pods.go:126] duration metric: took 1.095519919s to wait for k8s-apps to be running ...
	I1119 02:43:27.929180  299668 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:27.929242  299668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:27.942226  299668 system_svc.go:56] duration metric: took 13.038461ms WaitForService to wait for kubelet
	I1119 02:43:27.942253  299668 kubeadm.go:587] duration metric: took 14.41303862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:27.942270  299668 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:27.944884  299668 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:27.944908  299668 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:27.944920  299668 node_conditions.go:105] duration metric: took 2.645423ms to run NodePressure ...
	I1119 02:43:27.944931  299668 start.go:242] waiting for startup goroutines ...
	I1119 02:43:27.944937  299668 start.go:247] waiting for cluster config update ...
	I1119 02:43:27.944954  299668 start.go:256] writing updated cluster config ...
	I1119 02:43:27.945183  299668 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:27.948968  299668 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:27.953145  299668 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:27.956845  299668 pod_ready.go:94] pod "coredns-66bc5c9577-44bdr" is "Ready"
	I1119 02:43:27.956865  299668 pod_ready.go:86] duration metric: took 3.69361ms for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:27.958618  299668 pod_ready.go:83] waiting for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:27.962171  299668 pod_ready.go:94] pod "etcd-no-preload-837474" is "Ready"
	I1119 02:43:27.962190  299668 pod_ready.go:86] duration metric: took 3.556006ms for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:27.963852  299668 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:27.967155  299668 pod_ready.go:94] pod "kube-apiserver-no-preload-837474" is "Ready"
	I1119 02:43:27.967173  299668 pod_ready.go:86] duration metric: took 3.303222ms for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:27.968751  299668 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:28.353157  299668 pod_ready.go:94] pod "kube-controller-manager-no-preload-837474" is "Ready"
	I1119 02:43:28.353187  299668 pod_ready.go:86] duration metric: took 384.418601ms for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:28.554616  299668 pod_ready.go:83] waiting for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:28.953298  299668 pod_ready.go:94] pod "kube-proxy-hmxzk" is "Ready"
	I1119 02:43:28.953321  299668 pod_ready.go:86] duration metric: took 398.677815ms for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:29.155015  299668 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:29.554367  299668 pod_ready.go:94] pod "kube-scheduler-no-preload-837474" is "Ready"
	I1119 02:43:29.554410  299668 pod_ready.go:86] duration metric: took 399.368565ms for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:29.554428  299668 pod_ready.go:40] duration metric: took 1.605427357s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:29.602134  299668 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:43:29.604554  299668 out.go:179] * Done! kubectl is now configured to use "no-preload-837474" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 02:43:24 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:24.742082398Z" level=info msg="Starting container: db9670dae5d75546b1028a8df2680f07aff9bf2d3e545a0f677bf0f48fee48bc" id=f6206069-f44b-4452-9f3f-5c35507c4770 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:43:24 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:24.744191782Z" level=info msg="Started container" PID=1822 containerID=db9670dae5d75546b1028a8df2680f07aff9bf2d3e545a0f677bf0f48fee48bc description=kube-system/coredns-66bc5c9577-bht2q/coredns id=f6206069-f44b-4452-9f3f-5c35507c4770 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8cf3ea61c7fedea6951b6d9b197819a51488147024e18d7d1312acff43e96d66
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.210254038Z" level=info msg="Running pod sandbox: default/busybox/POD" id=850bc819-a1e1-455f-855e-c87b083661b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.21033659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.214957723Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a9485b9962b459697f11cffd4de1381f39fa497ea02710ddc2ce150b112f4314 UID:08eabde5-9057-44c1-9c3d-ee7388fc4224 NetNS:/var/run/netns/c21f186e-aaf1-4209-a87f-31f1658ba915 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008fe430}] Aliases:map[]}"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.214989128Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.224122788Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a9485b9962b459697f11cffd4de1381f39fa497ea02710ddc2ce150b112f4314 UID:08eabde5-9057-44c1-9c3d-ee7388fc4224 NetNS:/var/run/netns/c21f186e-aaf1-4209-a87f-31f1658ba915 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008fe430}] Aliases:map[]}"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.224246976Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.224978567Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.226063793Z" level=info msg="Ran pod sandbox a9485b9962b459697f11cffd4de1381f39fa497ea02710ddc2ce150b112f4314 with infra container: default/busybox/POD" id=850bc819-a1e1-455f-855e-c87b083661b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.227208938Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc17b019-bab1-43a7-a01a-3f82a9580887 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.22731009Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dc17b019-bab1-43a7-a01a-3f82a9580887 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.227338976Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dc17b019-bab1-43a7-a01a-3f82a9580887 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.22809664Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6cb2fa7d-e4c6-4fc9-9f88-c57855a12f7f name=/runtime.v1.ImageService/PullImage
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.229630354Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.877408672Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6cb2fa7d-e4c6-4fc9-9f88-c57855a12f7f name=/runtime.v1.ImageService/PullImage
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.877899064Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=67215f15-7639-4110-9596-16a3410fb4b7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.878979936Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=49fd44ec-a44b-4dca-88cb-36dda69704c6 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.882177914Z" level=info msg="Creating container: default/busybox/busybox" id=89b538c9-4761-456e-94d0-64ac03bc6924 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.882298124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.885801525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.886167446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.915151538Z" level=info msg="Created container 2a6d7afde0782b9b104d0306cdc59a5162c047fcbc56886004c730ead9fdfbc7: default/busybox/busybox" id=89b538c9-4761-456e-94d0-64ac03bc6924 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.915691731Z" level=info msg="Starting container: 2a6d7afde0782b9b104d0306cdc59a5162c047fcbc56886004c730ead9fdfbc7" id=c284ce46-683b-49cf-bffe-d31f9562c485 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:43:27 default-k8s-diff-port-167150 crio[770]: time="2025-11-19T02:43:27.917278601Z" level=info msg="Started container" PID=1898 containerID=2a6d7afde0782b9b104d0306cdc59a5162c047fcbc56886004c730ead9fdfbc7 description=default/busybox/busybox id=c284ce46-683b-49cf-bffe-d31f9562c485 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a9485b9962b459697f11cffd4de1381f39fa497ea02710ddc2ce150b112f4314
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2a6d7afde0782       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   a9485b9962b45       busybox                                                default
	db9670dae5d75       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   8cf3ea61c7fed       coredns-66bc5c9577-bht2q                               kube-system
	36bcc7c8a4435       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   4bd3438216d60       storage-provisioner                                    kube-system
	ce1d8f5b87269       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      21 seconds ago      Running             kindnet-cni               0                   98363dfc8d8e5       kindnet-rs6jh                                          kube-system
	492cb41bd2e7c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      21 seconds ago      Running             kube-proxy                0                   38556f9b9aa93       kube-proxy-8gl4n                                       kube-system
	97462423be65e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      32 seconds ago      Running             kube-controller-manager   0                   4a309a39d6ce8       kube-controller-manager-default-k8s-diff-port-167150   kube-system
	f0bc2a7f1f86e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      32 seconds ago      Running             kube-scheduler            0                   ee22c8d93c5a1       kube-scheduler-default-k8s-diff-port-167150            kube-system
	15837d4b98704       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      32 seconds ago      Running             etcd                      0                   d82c39a3714ad       etcd-default-k8s-diff-port-167150                      kube-system
	1c4e05de3b790       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      32 seconds ago      Running             kube-apiserver            0                   ebba8f53d6a8c       kube-apiserver-default-k8s-diff-port-167150            kube-system
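The CRI-O log above shows the kubelet driving the CRI RuntimeService (CreateContainer, StartContainer) over gRPC, and the table is essentially what a ListContainers call returns. A sketch of that call against CRI-O's default socket using the k8s.io/cri-api client; this is illustrative tooling, not what produced the table:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path is CRI-O's default; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Container IDs are 64 hex chars; 13 matches the table's truncation.
		fmt.Printf("%s %s %v\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}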
	
	
	==> coredns [db9670dae5d75546b1028a8df2680f07aff9bf2d3e545a0f677bf0f48fee48bc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45931 - 30124 "HINFO IN 5880141807881517942.1467693724383378056. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.44746403s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-167150
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-167150
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=default-k8s-diff-port-167150
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:43:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-167150
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:43:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:43:24 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:43:24 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:43:24 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:43:24 +0000   Wed, 19 Nov 2025 02:43:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-167150
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                e0cfffa3-371a-463d-bbd7-aef4f2317c27
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-bht2q                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22s
	  kube-system                 etcd-default-k8s-diff-port-167150                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-rs6jh                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-167150             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-167150    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-8gl4n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-167150             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x8 over 33s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientPID
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s                node-controller  Node default-k8s-diff-port-167150 event: Registered Node default-k8s-diff-port-167150 in Controller
	  Normal  NodeReady                11s                kubelet          Node default-k8s-diff-port-167150 status is now: NodeReady
	
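In the Allocated resources table above, kubectl computes each percentage against the node's allocatable capacity and truncates to a whole percent: 850m of 8000m CPU is 850/8000 ≈ 10%, and 220Mi (225280Ki) of 32863340Ki memory is ≈ 0.7%, which displays as 0%.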
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [15837d4b9870400aabe2dfa09abb223f4caf7f23741477d1611bdd111bf4915c] <==
	{"level":"warn","ts":"2025-11-19T02:43:04.393455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.402559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.409824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.417121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.424356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.431687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.438256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.445423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.452845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.460162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.466512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.474883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.486303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.492745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.499629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.506947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.513362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.519503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.526374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.534794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.545035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.566539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.574245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.580491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.634284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53648","server-name":"","error":"EOF"}
	
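The run of etcd "rejected connection on client endpoint ... error EOF" warnings above is characteristic of plain TCP health probes: a client opens the client port and hangs up before completing a TLS handshake, which etcd reports as EOF. The timestamps (all within ~0.25s, just before the apiserver finishes starting) fit a startup wait loop rather than a fault. A probe of that shape can be reproduced by hand; this is a sketch assuming etcd's default client port 2379 and that nc exists in the node image:

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-167150 -- nc -z 127.0.0.1 2379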
	
	==> kernel <==
	 02:43:35 up  1:26,  0 user,  load average: 3.90, 3.29, 2.19
	Linux default-k8s-diff-port-167150 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ce1d8f5b87269cc25957de9c74a72e4e6befaf41cd3877c59f81d0526df7eb6f] <==
	I1119 02:43:14.124986       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:43:14.125280       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 02:43:14.125394       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:43:14.125408       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:43:14.125428       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:43:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:43:14.419118       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:43:14.419159       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:43:14.419271       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:43:14.419577       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:43:14.819708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:43:14.819742       1 metrics.go:72] Registering metrics
	I1119 02:43:14.819845       1 controller.go:711] "Syncing nftables rules"
	I1119 02:43:24.330343       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:43:24.330383       1 main.go:301] handling current node
	I1119 02:43:34.331519       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:43:34.331547       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1c4e05de3b7906df77e9ac5936d36a14cf5d69268c7f6b5ec11fe32be42a6c83] <==
	I1119 02:43:05.200502       1 aggregator.go:171] initial CRD sync complete...
	I1119 02:43:05.200515       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:43:05.200522       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:43:05.200529       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:43:05.201971       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 02:43:05.221003       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 02:43:05.389167       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:43:06.096988       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:43:06.101149       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:43:06.101163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:43:06.562045       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:43:06.597472       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:43:06.697091       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:43:06.703214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1119 02:43:06.704355       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:43:06.708583       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:43:07.138200       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:43:07.902811       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:43:07.913301       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:43:07.922570       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:43:12.508308       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:43:12.515754       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:43:12.990157       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:43:13.139782       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1119 02:43:33.985792       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:40958: use of closed network connection
	
	
	==> kube-controller-manager [97462423be65e04fb7de0db6c426a93fecb94fd4fce9ee7e8665ee8cf2f3a728] <==
	I1119 02:43:12.138338       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:43:12.138348       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:43:12.138419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:43:12.138530       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:43:12.138631       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:43:12.138818       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:43:12.138848       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:43:12.138964       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:43:12.141656       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:43:12.141687       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:43:12.141694       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 02:43:12.141783       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:43:12.141845       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:43:12.141856       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:43:12.141862       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:43:12.142849       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:43:12.144043       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:43:12.148075       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-167150" podCIDRs=["10.244.0.0/24"]
	I1119 02:43:12.153766       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:43:12.153853       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:43:12.153924       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-167150"
	I1119 02:43:12.153997       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:43:12.159022       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:43:12.159025       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:43:27.156405       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [492cb41bd2e7c5514dec58fb1f0e6f75a6c1b909e00a0ce8ab3a391023d63419] <==
	I1119 02:43:14.008303       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:43:14.075140       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:43:14.175731       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:43:14.175760       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1119 02:43:14.175874       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:43:14.194024       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:43:14.194080       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:43:14.199729       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:43:14.200496       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:43:14.200590       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:14.203021       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:43:14.203041       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:43:14.203063       1 config.go:200] "Starting service config controller"
	I1119 02:43:14.203069       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:43:14.203083       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:43:14.203087       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:43:14.203250       1 config.go:309] "Starting node config controller"
	I1119 02:43:14.203268       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:43:14.203276       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:43:14.303816       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:43:14.303825       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:43:14.303894       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
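The "Kube-proxy configuration may be incomplete or incorrect" line near the top of this section is kube-proxy's own advisory (nodePortAddresses is unset, so NodePorts bind on all local IPs), not a failure in this run. If the suggested narrowing is wanted, the setting lives in the kubeadm-managed kube-proxy ConfigMap; a sketch, assuming the kubeadm default ConfigMap name and its config.conf key, using the literal "primary" that the warning itself suggests:

	kubectl --context default-k8s-diff-port-167150 -n kube-system edit configmap kube-proxy
	# in the embedded config.conf document, set: nodePortAddresses: ["primary"]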
	
	==> kube-scheduler [f0bc2a7f1f86e5887f28d3f6aa7c382f4e8244f9c5c94393e97a6865dbb9d665] <==
	E1119 02:43:05.149231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:43:05.149248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:43:05.149327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:43:05.149334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:43:05.149344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:43:05.149372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:43:05.149477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:43:05.149523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:43:05.149515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:43:05.149555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:43:05.149565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:43:05.149599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:43:05.149630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:43:05.149650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:43:05.149713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:43:05.149788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:43:05.958365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:43:05.980133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:43:06.056220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:43:06.068641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:43:06.171239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:43:06.193666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:43:06.231083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:43:06.378617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1119 02:43:08.648059       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:13.019494    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv7s7\" (UniqueName: \"kubernetes.io/projected/05ae880f-e69c-4513-b3ab-f76b85c4ac98-kube-api-access-rv7s7\") pod \"kindnet-rs6jh\" (UID: \"05ae880f-e69c-4513-b3ab-f76b85c4ac98\") " pod="kube-system/kindnet-rs6jh"
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:13.019594    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55pq8\" (UniqueName: \"kubernetes.io/projected/33cee4c4-dbb5-4bc2-becb-ef2654e266b0-kube-api-access-55pq8\") pod \"kube-proxy-8gl4n\" (UID: \"33cee4c4-dbb5-4bc2-becb-ef2654e266b0\") " pod="kube-system/kube-proxy-8gl4n"
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:13.019670    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05ae880f-e69c-4513-b3ab-f76b85c4ac98-xtables-lock\") pod \"kindnet-rs6jh\" (UID: \"05ae880f-e69c-4513-b3ab-f76b85c4ac98\") " pod="kube-system/kindnet-rs6jh"
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:13.019696    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05ae880f-e69c-4513-b3ab-f76b85c4ac98-lib-modules\") pod \"kindnet-rs6jh\" (UID: \"05ae880f-e69c-4513-b3ab-f76b85c4ac98\") " pod="kube-system/kindnet-rs6jh"
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:13.019721    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33cee4c4-dbb5-4bc2-becb-ef2654e266b0-kube-proxy\") pod \"kube-proxy-8gl4n\" (UID: \"33cee4c4-dbb5-4bc2-becb-ef2654e266b0\") " pod="kube-system/kube-proxy-8gl4n"
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:13.019739    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33cee4c4-dbb5-4bc2-becb-ef2654e266b0-lib-modules\") pod \"kube-proxy-8gl4n\" (UID: \"33cee4c4-dbb5-4bc2-becb-ef2654e266b0\") " pod="kube-system/kube-proxy-8gl4n"
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:13.019764    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33cee4c4-dbb5-4bc2-becb-ef2654e266b0-xtables-lock\") pod \"kube-proxy-8gl4n\" (UID: \"33cee4c4-dbb5-4bc2-becb-ef2654e266b0\") " pod="kube-system/kube-proxy-8gl4n"
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: E1119 02:43:13.125833    1314 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: E1119 02:43:13.125869    1314 projected.go:196] Error preparing data for projected volume kube-api-access-rv7s7 for pod kube-system/kindnet-rs6jh: configmap "kube-root-ca.crt" not found
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: E1119 02:43:13.125905    1314 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: E1119 02:43:13.125921    1314 projected.go:196] Error preparing data for projected volume kube-api-access-55pq8 for pod kube-system/kube-proxy-8gl4n: configmap "kube-root-ca.crt" not found
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: E1119 02:43:13.125966    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/05ae880f-e69c-4513-b3ab-f76b85c4ac98-kube-api-access-rv7s7 podName:05ae880f-e69c-4513-b3ab-f76b85c4ac98 nodeName:}" failed. No retries permitted until 2025-11-19 02:43:13.625937467 +0000 UTC m=+5.929702882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rv7s7" (UniqueName: "kubernetes.io/projected/05ae880f-e69c-4513-b3ab-f76b85c4ac98-kube-api-access-rv7s7") pod "kindnet-rs6jh" (UID: "05ae880f-e69c-4513-b3ab-f76b85c4ac98") : configmap "kube-root-ca.crt" not found
	Nov 19 02:43:13 default-k8s-diff-port-167150 kubelet[1314]: E1119 02:43:13.125985    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33cee4c4-dbb5-4bc2-becb-ef2654e266b0-kube-api-access-55pq8 podName:33cee4c4-dbb5-4bc2-becb-ef2654e266b0 nodeName:}" failed. No retries permitted until 2025-11-19 02:43:13.625975672 +0000 UTC m=+5.929741069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-55pq8" (UniqueName: "kubernetes.io/projected/33cee4c4-dbb5-4bc2-becb-ef2654e266b0-kube-api-access-55pq8") pod "kube-proxy-8gl4n" (UID: "33cee4c4-dbb5-4bc2-becb-ef2654e266b0") : configmap "kube-root-ca.crt" not found
	Nov 19 02:43:14 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:14.866176    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rs6jh" podStartSLOduration=2.8661537409999998 podStartE2EDuration="2.866153741s" podCreationTimestamp="2025-11-19 02:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:14.854917637 +0000 UTC m=+7.158683054" watchObservedRunningTime="2025-11-19 02:43:14.866153741 +0000 UTC m=+7.169919158"
	Nov 19 02:43:16 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:16.053932    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8gl4n" podStartSLOduration=4.053909011 podStartE2EDuration="4.053909011s" podCreationTimestamp="2025-11-19 02:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:14.866532349 +0000 UTC m=+7.170297769" watchObservedRunningTime="2025-11-19 02:43:16.053909011 +0000 UTC m=+8.357674430"
	Nov 19 02:43:24 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:24.368559    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:43:24 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:24.401850    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/03ff5a52-b9d1-454f-ab4c-ca75268b32ef-tmp\") pod \"storage-provisioner\" (UID: \"03ff5a52-b9d1-454f-ab4c-ca75268b32ef\") " pod="kube-system/storage-provisioner"
	Nov 19 02:43:24 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:24.401886    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njfc\" (UniqueName: \"kubernetes.io/projected/03ff5a52-b9d1-454f-ab4c-ca75268b32ef-kube-api-access-6njfc\") pod \"storage-provisioner\" (UID: \"03ff5a52-b9d1-454f-ab4c-ca75268b32ef\") " pod="kube-system/storage-provisioner"
	Nov 19 02:43:24 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:24.502852    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67eaa46f-0f14-47fe-b518-8fc2339ac090-config-volume\") pod \"coredns-66bc5c9577-bht2q\" (UID: \"67eaa46f-0f14-47fe-b518-8fc2339ac090\") " pod="kube-system/coredns-66bc5c9577-bht2q"
	Nov 19 02:43:24 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:24.502901    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btpw8\" (UniqueName: \"kubernetes.io/projected/67eaa46f-0f14-47fe-b518-8fc2339ac090-kube-api-access-btpw8\") pod \"coredns-66bc5c9577-bht2q\" (UID: \"67eaa46f-0f14-47fe-b518-8fc2339ac090\") " pod="kube-system/coredns-66bc5c9577-bht2q"
	Nov 19 02:43:24 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:24.872787    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.872766325 podStartE2EDuration="12.872766325s" podCreationTimestamp="2025-11-19 02:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:24.872524316 +0000 UTC m=+17.176289736" watchObservedRunningTime="2025-11-19 02:43:24.872766325 +0000 UTC m=+17.176531743"
	Nov 19 02:43:26 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:26.902830    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bht2q" podStartSLOduration=13.9028027 podStartE2EDuration="13.9028027s" podCreationTimestamp="2025-11-19 02:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:24.882446679 +0000 UTC m=+17.186212098" watchObservedRunningTime="2025-11-19 02:43:26.9028027 +0000 UTC m=+19.206568118"
	Nov 19 02:43:27 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:27.017198    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5fz\" (UniqueName: \"kubernetes.io/projected/08eabde5-9057-44c1-9c3d-ee7388fc4224-kube-api-access-7r5fz\") pod \"busybox\" (UID: \"08eabde5-9057-44c1-9c3d-ee7388fc4224\") " pod="default/busybox"
	Nov 19 02:43:28 default-k8s-diff-port-167150 kubelet[1314]: I1119 02:43:28.884717    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.233778142 podStartE2EDuration="2.8846912s" podCreationTimestamp="2025-11-19 02:43:26 +0000 UTC" firstStartedPulling="2025-11-19 02:43:27.227654145 +0000 UTC m=+19.531419558" lastFinishedPulling="2025-11-19 02:43:27.878567219 +0000 UTC m=+20.182332616" observedRunningTime="2025-11-19 02:43:28.884491766 +0000 UTC m=+21.188257184" watchObservedRunningTime="2025-11-19 02:43:28.8846912 +0000 UTC m=+21.188456617"
	Nov 19 02:43:33 default-k8s-diff-port-167150 kubelet[1314]: E1119 02:43:33.985722    1314 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49456->127.0.0.1:33287: write tcp 127.0.0.1:49456->127.0.0.1:33287: write: broken pipe
	
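The kubelet "broken pipe" proxy error here and the apiserver "use of closed network connection" error earlier carry the same timestamp (02:43:33.985), so they are very likely two views of a single event: a client, plausibly the test harness collecting these logs, tearing down a streaming connection mid-transfer. Neither indicates a cluster fault on its own.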
	
	==> storage-provisioner [36bcc7c8a44358db8913e72b2fd4beb5bb3d5f6c21eb82113f9a1efbfc8dc88d] <==
	I1119 02:43:24.751508       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:43:24.761818       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:43:24.761875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:43:24.763736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:24.767987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:43:24.768179       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:43:24.768347       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3714f73f-a3cc-42cd-ae7e-a03ea89c8e13", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-167150_f71d2e20-6844-4dc5-a5ed-22067875216d became leader
	I1119 02:43:24.768451       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167150_f71d2e20-6844-4dc5-a5ed-22067875216d!
	W1119 02:43:24.770233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:24.774217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:43:24.868908       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167150_f71d2e20-6844-4dc5-a5ed-22067875216d!
	W1119 02:43:26.777816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:26.782677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:28.786279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:28.791154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:30.794215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:30.799114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:32.801512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:32.805156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:34.808389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:34.812706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
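The repeated "v1 Endpoints is deprecated" warnings above come from the provisioner's leader election, which, per the "k8s.io-minikube-hostpath" lease messages, appears to still renew a v1 Endpoints lock roughly every two seconds; each renewal trips the API server's deprecation warning. The lock object can be inspected directly:

	kubectl --context default-k8s-diff-port-167150 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml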

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-167150 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.07s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.05s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (249.306352ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:43:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-837474 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-837474 describe deploy/metrics-server -n kube-system: exit status 1 (62.682073ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-837474 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
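The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's pre-flight paused-container check, which (as the error text shows) shells out to `sudo runc list -f json` inside the node; `open /run/runc: no such file or directory` means runc has no state directory there at all, which is what you would see if cri-o is driving containers through a different OCI runtime or runc root than the check assumes. Two quick host-side checks (diagnostic sketches, not commands from the original run):

	out/minikube-linux-amd64 ssh -p no-preload-837474 -- sudo ls /run/runc
	out/minikube-linux-amd64 ssh -p no-preload-837474 -- sudo crictl info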
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-837474
helpers_test.go:243: (dbg) docker inspect no-preload-837474:

-- stdout --
	[
	    {
	        "Id": "778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b",
	        "Created": "2025-11-19T02:42:31.131345889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:42:31.163625205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/hostname",
	        "HostsPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/hosts",
	        "LogPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b-json.log",
	        "Name": "/no-preload-837474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-837474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-837474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b",
	                "LowerDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-837474",
	                "Source": "/var/lib/docker/volumes/no-preload-837474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-837474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-837474",
	                "name.minikube.sigs.k8s.io": "no-preload-837474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ff5b00aa73465a58bc6aa56fa5274600adf0644ce077e2d7d4d80b149da7a0cb",
	            "SandboxKey": "/var/run/docker/netns/ff5b00aa7346",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-837474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5cdcd8e70bb42a339d48cf84add8542874b20a7d1d91c5ad0bc5b1415ad92cb",
	                    "EndpointID": "542de673a9d564489df11aebc310f9f658d5b0416599032d9bc21d78fde5455d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1e:7b:e7:8c:52:84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-837474",
	                        "778842a2abfd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
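The block above is ordinary docker container inspect JSON, so fields can be pulled out programmatically rather than by eye. A minimal Go sketch (not part of the test harness; the struct models only the fields shown above) that reads the inspect array on stdin and prints the host endpoint mapped to the guest SSH port 22/tcp:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    // Only the fields used here; docker container inspect emits a JSON array.
    type inspect struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        var containers []inspect
        if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            for _, b := range c.NetworkSettings.Ports["22/tcp"] {
                fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
            }
        }
    }

Piped as docker container inspect no-preload-837474 | go run inspect.go, this would print 127.0.0.1:33093 for the container above.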
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837474 -n no-preload-837474
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-837474 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-001617 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │                     │
	│ ssh     │ -p bridge-001617 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo containerd config dump                                                                                                                                                                                                  │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo crio config                                                                                                                                                                                                             │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p bridge-001617                                                                                                                                                                                                                              │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p disable-driver-mounts-682232                                                                                                                                                                                                               │ disable-driver-mounts-682232 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p old-k8s-version-987573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-167150 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:43:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:43:37.056369  317113 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:43:37.056516  317113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:43:37.056527  317113 out.go:374] Setting ErrFile to fd 2...
	I1119 02:43:37.056533  317113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:43:37.056748  317113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:43:37.057208  317113 out.go:368] Setting JSON to false
	I1119 02:43:37.058537  317113 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5164,"bootTime":1763515053,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:43:37.058629  317113 start.go:143] virtualization: kvm guest
	I1119 02:43:37.060653  317113 out.go:179] * [old-k8s-version-987573] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:43:37.062337  317113 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:43:37.062362  317113 notify.go:221] Checking for updates...
	I1119 02:43:37.064862  317113 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:43:37.066278  317113 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:37.067352  317113 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:43:37.068458  317113 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:43:37.069570  317113 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:43:37.070901  317113 config.go:182] Loaded profile config "old-k8s-version-987573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:43:37.072343  317113 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1119 02:43:37.073424  317113 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:43:37.097237  317113 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:43:37.097339  317113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:43:37.160989  317113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-19 02:43:37.149487462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:43:37.161139  317113 docker.go:319] overlay module found
	I1119 02:43:37.162883  317113 out.go:179] * Using the docker driver based on existing profile
	I1119 02:43:37.164018  317113 start.go:309] selected driver: docker
	I1119 02:43:37.164035  317113 start.go:930] validating driver "docker" against &{Name:old-k8s-version-987573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-987573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:43:37.164141  317113 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:43:37.164898  317113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:43:37.225797  317113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-19 02:43:37.216381208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:43:37.226155  317113 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:37.226198  317113 cni.go:84] Creating CNI manager for ""
	I1119 02:43:37.226262  317113 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:37.226311  317113 start.go:353] cluster config:
	{Name:old-k8s-version-987573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-987573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:43:37.228291  317113 out.go:179] * Starting "old-k8s-version-987573" primary control-plane node in "old-k8s-version-987573" cluster
	I1119 02:43:37.230081  317113 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:43:37.231222  317113 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:43:37.232254  317113 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 02:43:37.232295  317113 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1119 02:43:37.232312  317113 cache.go:65] Caching tarball of preloaded images
	I1119 02:43:37.232335  317113 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:43:37.232445  317113 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:43:37.232463  317113 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1119 02:43:37.232598  317113 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/old-k8s-version-987573/config.json ...
	I1119 02:43:37.253966  317113 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:43:37.253990  317113 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:43:37.254009  317113 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:43:37.254035  317113 start.go:360] acquireMachinesLock for old-k8s-version-987573: {Name:mk6d181ff592d94ae92afdd06cd1a13c92915765 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:37.254110  317113 start.go:364] duration metric: took 39.844µs to acquireMachinesLock for "old-k8s-version-987573"
	I1119 02:43:37.254132  317113 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:43:37.254138  317113 fix.go:54] fixHost starting: 
	I1119 02:43:37.254360  317113 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:43:37.272934  317113 fix.go:112] recreateIfNeeded on old-k8s-version-987573: state=Stopped err=<nil>
	W1119 02:43:37.272968  317113 fix.go:138] unexpected machine state, will restart: <nil>
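Each entry in the block above follows the klog format its own header declares: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A throwaway Go sketch (it assumes only that format, nothing minikube-specific) that splits a line into severity, date, time, PID, source location, and message:

    package main

    import (
        "fmt"
        "regexp"
    )

    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
        line := "I1119 02:43:37.056369  317113 out.go:360] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }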
	
	
	==> CRI-O <==
	Nov 19 02:43:26 no-preload-837474 crio[767]: time="2025-11-19T02:43:26.891907061Z" level=info msg="Starting container: b0df30ee76e22e629eca443ca15be4fd516ddffa8619a9a0876649b26e018ba1" id=4e9c31ce-cbec-45b9-8d74-85102517dfe1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:43:26 no-preload-837474 crio[767]: time="2025-11-19T02:43:26.893983627Z" level=info msg="Started container" PID=2930 containerID=b0df30ee76e22e629eca443ca15be4fd516ddffa8619a9a0876649b26e018ba1 description=kube-system/coredns-66bc5c9577-44bdr/coredns id=4e9c31ce-cbec-45b9-8d74-85102517dfe1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=96f6137447737ed1aca1b2bbc57b94dc384e27e33918d687c838a5d2a97c9970
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.058410314Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7f2163de-04b2-407f-a50d-48237e7c4def name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.058507629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.065360289Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce3a753096d27f50de9563beb395444db6801af3919d4bea0d2f03872fe7df1a UID:7c27bd17-157a-4f48-89a3-960cbf7e1a9c NetNS:/var/run/netns/2e38ea95-06e4-4b48-94a4-128b09b1aaff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004b6ce8}] Aliases:map[]}"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.065385988Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.075561019Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce3a753096d27f50de9563beb395444db6801af3919d4bea0d2f03872fe7df1a UID:7c27bd17-157a-4f48-89a3-960cbf7e1a9c NetNS:/var/run/netns/2e38ea95-06e4-4b48-94a4-128b09b1aaff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004b6ce8}] Aliases:map[]}"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.075701353Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.076356134Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.077174942Z" level=info msg="Ran pod sandbox ce3a753096d27f50de9563beb395444db6801af3919d4bea0d2f03872fe7df1a with infra container: default/busybox/POD" id=7f2163de-04b2-407f-a50d-48237e7c4def name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.078307729Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=06664632-6fc5-45c5-9cac-90cdeb176403 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.078441505Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=06664632-6fc5-45c5-9cac-90cdeb176403 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.078487014Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=06664632-6fc5-45c5-9cac-90cdeb176403 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.079015747Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4566eb48-49dc-4249-8dfb-4bf491a3d743 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.080289302Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.712750907Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4566eb48-49dc-4249-8dfb-4bf491a3d743 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.713335825Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd26bb60-1b4f-48de-be0c-1a565a402cf6 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.714631204Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=792a18a2-6d4d-42d9-accc-d0cced194eb6 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.717843214Z" level=info msg="Creating container: default/busybox/busybox" id=489b3ad2-8309-45a1-ad9e-4e2e7c283a03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.717967756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.721319282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.721822279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.747869054Z" level=info msg="Created container 9338e1acaa0055c502c40eb15d6c85d3cbf9fc21a55366e9ea52033875ddb527: default/busybox/busybox" id=489b3ad2-8309-45a1-ad9e-4e2e7c283a03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.748387673Z" level=info msg="Starting container: 9338e1acaa0055c502c40eb15d6c85d3cbf9fc21a55366e9ea52033875ddb527" id=4b2b30fb-2408-4e64-97de-f4e56343a175 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:43:30 no-preload-837474 crio[767]: time="2025-11-19T02:43:30.749945889Z" level=info msg="Started container" PID=3007 containerID=9338e1acaa0055c502c40eb15d6c85d3cbf9fc21a55366e9ea52033875ddb527 description=default/busybox/busybox id=4b2b30fb-2408-4e64-97de-f4e56343a175 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce3a753096d27f50de9563beb395444db6801af3919d4bea0d2f03872fe7df1a
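These entries are the server side of gRPC calls on the Kubernetes CRI (RunPodSandbox, then PullImage/CreateContainer/StartContainer for the busybox pod). For orientation, a minimal client sketch using the upstream k8s.io/cri-api bindings; the socket path is the CRI-O default and is an assumption here, not something the test exercises:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Default CRI-O socket; adjust for other runtimes.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Id, c.Metadata.Name, c.State)
        }
    }

Run inside the node (for example via minikube ssh), this would list the same containers the "container status" section below shows.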
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9338e1acaa005       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   ce3a753096d27       busybox                                     default
	b0df30ee76e22       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   96f6137447737       coredns-66bc5c9577-44bdr                    kube-system
	f7cf2017d4ab9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   7405c8e986b09       storage-provisioner                         kube-system
	de9e97a26ef8a       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   b8bf06e2b15d2       kindnet-96d7l                               kube-system
	640e09fddbbe4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   25a498233cb6e       kube-proxy-hmxzk                            kube-system
	4f1526dce4dd7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   6a616b6d5e212       kube-controller-manager-no-preload-837474   kube-system
	8e707f50b07dc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   a5b514db16cae       kube-apiserver-no-preload-837474            kube-system
	7d65c29bc69e8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   177ab8d71e227       etcd-no-preload-837474                      kube-system
	4e1375161d2c9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   6e0aaad29d69e       kube-scheduler-no-preload-837474            kube-system
	
	
	==> coredns [b0df30ee76e22e629eca443ca15be4fd516ddffa8619a9a0876649b26e018ba1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39046 - 48679 "HINFO IN 8400352663172843836.8060484365022219896. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.10909347s
	
	
	==> describe nodes <==
	Name:               no-preload-837474
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-837474
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=no-preload-837474
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:43:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-837474
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:43:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:43:26 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:43:26 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:43:26 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:43:26 +0000   Wed, 19 Nov 2025 02:43:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-837474
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                1196f62d-ee96-4bda-889c-0da66532b529
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-44bdr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-837474                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-96d7l                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-837474             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-837474    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-hmxzk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-837474             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node no-preload-837474 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node no-preload-837474 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node no-preload-837474 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node no-preload-837474 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node no-preload-837474 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node no-preload-837474 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node no-preload-837474 event: Registered Node no-preload-837474 in Controller
	  Normal  NodeReady                12s                kubelet          Node no-preload-837474 status is now: NodeReady
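The percentages in the "Allocated resources" block are summed pod requests over node allocatable, truncated to a whole percent. A quick check of the 850m CPU figure against the 8-CPU allocatable reported above:

    package main

    import "fmt"

    func main() {
        // CPU requests from the pod table above, in millicores:
        // coredns 100 + etcd 100 + kindnet 100 + kube-apiserver 250 +
        // kube-controller-manager 200 + kube-scheduler 100 (the rest request 0).
        requests := 100 + 100 + 100 + 250 + 200 + 100
        allocatable := 8 * 1000 // 8 CPUs
        fmt.Printf("%dm of %dm = %d%%\n", requests, allocatable, requests*100/allocatable)
        // prints: 850m of 8000m = 10%
    }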
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [7d65c29bc69e860bfb1e9bb02cbe8393d088c2d6281dc549bfebc2bf4cffc517] <==
	{"level":"warn","ts":"2025-11-19T02:43:04.291094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.298046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.307132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.314225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.321086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.327772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.333898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.344494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.353995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.368014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.387728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.395222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.401812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.408217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.415105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.422140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.428711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.435894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.443309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.450245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.457732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.476638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.485226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.491427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:04.548964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53560","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:43:38 up  1:26,  0 user,  load average: 3.90, 3.29, 2.19
	Linux no-preload-837474 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de9e97a26ef8a2ffad5ba06af53ea5ce4ac0082da7d973c30066ed9ef7161b8f] <==
	I1119 02:43:15.774920       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:43:15.775179       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 02:43:15.775333       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:43:15.775349       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:43:15.775368       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:43:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:43:15.978126       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:43:15.978173       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:43:15.978192       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:43:15.978681       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:43:16.379136       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:43:16.379159       1 metrics.go:72] Registering metrics
	I1119 02:43:16.379196       1 controller.go:711] "Syncing nftables rules"
	I1119 02:43:25.982022       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:43:25.982076       1 main.go:301] handling current node
	I1119 02:43:35.981525       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:43:35.981566       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e707f50b07dc5724d4b933544159d7076919d5201fb78568e7153061ae9b363] <==
	E1119 02:43:05.128992       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 02:43:05.152235       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:43:05.156506       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:43:05.156736       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:43:05.161403       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:43:05.161953       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:43:05.269738       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:43:05.955187       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:43:05.959649       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:43:05.959665       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:43:06.445342       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:43:06.483827       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:43:06.559280       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:43:06.565271       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 02:43:06.566286       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:43:06.570620       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:43:07.009796       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:43:07.572983       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:43:07.589980       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:43:07.601018       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:43:12.364513       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:43:12.368705       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:43:12.762963       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:43:13.013860       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1119 02:43:36.843288       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:58878: use of closed network connection
	
	
	==> kube-controller-manager [4f1526dce4dd71b76ff4d5e9f84688267abcb6e77159f7373f4649389c4e0c91] <==
	I1119 02:43:11.972079       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-837474" podCIDRs=["10.244.0.0/24"]
	I1119 02:43:11.974143       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:43:12.008802       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:43:12.008841       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:43:12.008843       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:43:12.008896       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 02:43:12.010036       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 02:43:12.010059       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 02:43:12.010071       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 02:43:12.010081       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:43:12.010091       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:43:12.010135       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:43:12.010163       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:43:12.010201       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:43:12.012426       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:43:12.012509       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:43:12.013632       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 02:43:12.014885       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:43:12.015983       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:43:12.019186       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:43:12.031440       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:43:12.031459       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:43:12.031465       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:43:12.039721       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:43:26.966824       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [640e09fddbbe4bf4790952b09742bcdefc7623b85b585bdb3fe70cca973df07c] <==
	I1119 02:43:13.792268       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:43:13.874915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:43:13.975715       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:43:13.975772       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 02:43:13.975875       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:43:13.999856       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:43:13.999908       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:43:14.006084       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:43:14.006556       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:43:14.006591       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:14.008729       1 config.go:309] "Starting node config controller"
	I1119 02:43:14.008747       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:43:14.008756       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:43:14.008988       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:43:14.008997       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:43:14.009027       1 config.go:200] "Starting service config controller"
	I1119 02:43:14.009033       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:43:14.009047       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:43:14.009052       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:43:14.109463       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:43:14.109499       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:43:14.109529       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4e1375161d2c918ca761573f00c879e55b262993cb52279149590e487c8bf8be] <==
	E1119 02:43:05.038833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:43:05.038841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:43:05.038890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:43:05.038942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:43:05.039191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:43:05.039192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:43:05.039610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:43:05.040667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:43:05.041259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:43:05.041351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:43:05.041568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:43:05.041600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:43:05.041677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:43:05.041676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:43:05.041805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:43:05.041846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:43:05.918961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:43:05.936184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:43:05.954358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:43:06.059207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:43:06.073481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:43:06.125736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:43:06.239038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:43:06.356089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1119 02:43:08.432474       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: I1119 02:43:12.791557    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b-lib-modules\") pod \"kube-proxy-hmxzk\" (UID: \"0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b\") " pod="kube-system/kube-proxy-hmxzk"
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: I1119 02:43:12.791602    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smblm\" (UniqueName: \"kubernetes.io/projected/0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b-kube-api-access-smblm\") pod \"kube-proxy-hmxzk\" (UID: \"0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b\") " pod="kube-system/kube-proxy-hmxzk"
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: I1119 02:43:12.791624    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8eb5197-7836-4ec2-9fe3-e6354983a150-lib-modules\") pod \"kindnet-96d7l\" (UID: \"d8eb5197-7836-4ec2-9fe3-e6354983a150\") " pod="kube-system/kindnet-96d7l"
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: I1119 02:43:12.791673    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b-kube-proxy\") pod \"kube-proxy-hmxzk\" (UID: \"0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b\") " pod="kube-system/kube-proxy-hmxzk"
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: I1119 02:43:12.791702    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d8eb5197-7836-4ec2-9fe3-e6354983a150-cni-cfg\") pod \"kindnet-96d7l\" (UID: \"d8eb5197-7836-4ec2-9fe3-e6354983a150\") " pod="kube-system/kindnet-96d7l"
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: I1119 02:43:12.791723    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8eb5197-7836-4ec2-9fe3-e6354983a150-xtables-lock\") pod \"kindnet-96d7l\" (UID: \"d8eb5197-7836-4ec2-9fe3-e6354983a150\") " pod="kube-system/kindnet-96d7l"
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: I1119 02:43:12.791803    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfmb6\" (UniqueName: \"kubernetes.io/projected/d8eb5197-7836-4ec2-9fe3-e6354983a150-kube-api-access-gfmb6\") pod \"kindnet-96d7l\" (UID: \"d8eb5197-7836-4ec2-9fe3-e6354983a150\") " pod="kube-system/kindnet-96d7l"
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: I1119 02:43:12.791848    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b-xtables-lock\") pod \"kube-proxy-hmxzk\" (UID: \"0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b\") " pod="kube-system/kube-proxy-hmxzk"
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: E1119 02:43:12.899576    2310 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: E1119 02:43:12.899606    2310 projected.go:196] Error preparing data for projected volume kube-api-access-smblm for pod kube-system/kube-proxy-hmxzk: configmap "kube-root-ca.crt" not found
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: E1119 02:43:12.899586    2310 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: E1119 02:43:12.899693    2310 projected.go:196] Error preparing data for projected volume kube-api-access-gfmb6 for pod kube-system/kindnet-96d7l: configmap "kube-root-ca.crt" not found
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: E1119 02:43:12.899691    2310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b-kube-api-access-smblm podName:0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b nodeName:}" failed. No retries permitted until 2025-11-19 02:43:13.399662781 +0000 UTC m=+6.031628664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-smblm" (UniqueName: "kubernetes.io/projected/0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b-kube-api-access-smblm") pod "kube-proxy-hmxzk" (UID: "0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b") : configmap "kube-root-ca.crt" not found
	Nov 19 02:43:12 no-preload-837474 kubelet[2310]: E1119 02:43:12.899789    2310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8eb5197-7836-4ec2-9fe3-e6354983a150-kube-api-access-gfmb6 podName:d8eb5197-7836-4ec2-9fe3-e6354983a150 nodeName:}" failed. No retries permitted until 2025-11-19 02:43:13.399772219 +0000 UTC m=+6.031738116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gfmb6" (UniqueName: "kubernetes.io/projected/d8eb5197-7836-4ec2-9fe3-e6354983a150-kube-api-access-gfmb6") pod "kindnet-96d7l" (UID: "d8eb5197-7836-4ec2-9fe3-e6354983a150") : configmap "kube-root-ca.crt" not found
	Nov 19 02:43:14 no-preload-837474 kubelet[2310]: I1119 02:43:14.545399    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hmxzk" podStartSLOduration=2.545379035 podStartE2EDuration="2.545379035s" podCreationTimestamp="2025-11-19 02:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:14.545304582 +0000 UTC m=+7.177270484" watchObservedRunningTime="2025-11-19 02:43:14.545379035 +0000 UTC m=+7.177344937"
	Nov 19 02:43:16 no-preload-837474 kubelet[2310]: I1119 02:43:16.559622    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-96d7l" podStartSLOduration=2.682747665 podStartE2EDuration="4.559602121s" podCreationTimestamp="2025-11-19 02:43:12 +0000 UTC" firstStartedPulling="2025-11-19 02:43:13.692710308 +0000 UTC m=+6.324676203" lastFinishedPulling="2025-11-19 02:43:15.569564764 +0000 UTC m=+8.201530659" observedRunningTime="2025-11-19 02:43:16.559500926 +0000 UTC m=+9.191466829" watchObservedRunningTime="2025-11-19 02:43:16.559602121 +0000 UTC m=+9.191568024"
	Nov 19 02:43:26 no-preload-837474 kubelet[2310]: I1119 02:43:26.510397    2310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:43:26 no-preload-837474 kubelet[2310]: I1119 02:43:26.585921    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlwvh\" (UniqueName: \"kubernetes.io/projected/9ad0000a-752a-4a18-a649-dd63b3e638d9-kube-api-access-rlwvh\") pod \"coredns-66bc5c9577-44bdr\" (UID: \"9ad0000a-752a-4a18-a649-dd63b3e638d9\") " pod="kube-system/coredns-66bc5c9577-44bdr"
	Nov 19 02:43:26 no-preload-837474 kubelet[2310]: I1119 02:43:26.585956    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7b82e1eb-4a04-4145-8163-28073775b6ed-tmp\") pod \"storage-provisioner\" (UID: \"7b82e1eb-4a04-4145-8163-28073775b6ed\") " pod="kube-system/storage-provisioner"
	Nov 19 02:43:26 no-preload-837474 kubelet[2310]: I1119 02:43:26.585974    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbbcb\" (UniqueName: \"kubernetes.io/projected/7b82e1eb-4a04-4145-8163-28073775b6ed-kube-api-access-qbbcb\") pod \"storage-provisioner\" (UID: \"7b82e1eb-4a04-4145-8163-28073775b6ed\") " pod="kube-system/storage-provisioner"
	Nov 19 02:43:26 no-preload-837474 kubelet[2310]: I1119 02:43:26.585991    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ad0000a-752a-4a18-a649-dd63b3e638d9-config-volume\") pod \"coredns-66bc5c9577-44bdr\" (UID: \"9ad0000a-752a-4a18-a649-dd63b3e638d9\") " pod="kube-system/coredns-66bc5c9577-44bdr"
	Nov 19 02:43:27 no-preload-837474 kubelet[2310]: I1119 02:43:27.575037    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-44bdr" podStartSLOduration=14.575017219 podStartE2EDuration="14.575017219s" podCreationTimestamp="2025-11-19 02:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:27.574801822 +0000 UTC m=+20.206767725" watchObservedRunningTime="2025-11-19 02:43:27.575017219 +0000 UTC m=+20.206983140"
	Nov 19 02:43:27 no-preload-837474 kubelet[2310]: I1119 02:43:27.584510    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.584490527 podStartE2EDuration="13.584490527s" podCreationTimestamp="2025-11-19 02:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:43:27.584224292 +0000 UTC m=+20.216190194" watchObservedRunningTime="2025-11-19 02:43:27.584490527 +0000 UTC m=+20.216456428"
	Nov 19 02:43:29 no-preload-837474 kubelet[2310]: I1119 02:43:29.805646    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72tv9\" (UniqueName: \"kubernetes.io/projected/7c27bd17-157a-4f48-89a3-960cbf7e1a9c-kube-api-access-72tv9\") pod \"busybox\" (UID: \"7c27bd17-157a-4f48-89a3-960cbf7e1a9c\") " pod="default/busybox"
	Nov 19 02:43:31 no-preload-837474 kubelet[2310]: I1119 02:43:31.583592    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.94818475 podStartE2EDuration="2.583574795s" podCreationTimestamp="2025-11-19 02:43:29 +0000 UTC" firstStartedPulling="2025-11-19 02:43:30.078702834 +0000 UTC m=+22.710668714" lastFinishedPulling="2025-11-19 02:43:30.714092879 +0000 UTC m=+23.346058759" observedRunningTime="2025-11-19 02:43:31.583308358 +0000 UTC m=+24.215274262" watchObservedRunningTime="2025-11-19 02:43:31.583574795 +0000 UTC m=+24.215540696"
	
	
	==> storage-provisioner [f7cf2017d4ab9cfae22d2df307fb33cd4406cc88c66e359e47a85da3203890d7] <==
	I1119 02:43:26.899923       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:43:26.911046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:43:26.911192       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:43:26.913771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:26.919770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:43:26.919996       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:43:26.920169       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-837474_88a5ba31-596c-4e07-984e-f92bef6e543b!
	I1119 02:43:26.920261       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5860216f-7052-4908-a51f-f754ee84ec87", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-837474_88a5ba31-596c-4e07-984e-f92bef6e543b became leader
	W1119 02:43:26.922695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:26.926308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:43:27.021139       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-837474_88a5ba31-596c-4e07-984e-f92bef6e543b!
	W1119 02:43:28.930322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:28.934295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:30.937313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:30.943019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:32.945740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:32.949314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:34.952112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:34.956267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:36.960010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:43:36.963894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-837474 -n no-preload-837474
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-837474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.05s)
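The post-mortem above ends with a generic health sweep: `minikube status --format={{.APIServer}}` followed by a kubectl query that uses `--field-selector=status.phase!=Running` to surface every pod not in the Running phase. A minimal Go sketch of that sweep, shelling out the same way the harness does (the helper name collectNonRunningPods is illustrative, not from helpers_test.go; the kubectl flags are the ones visible in the log):

package main

import (
	"fmt"
	"os/exec"
)

// collectNonRunningPods mirrors the post-mortem kubectl call above: it asks the
// given kubectl context for the names of all pods, across all namespaces, whose
// phase is anything other than Running.
func collectNonRunningPods(context string) (string, error) {
	out, err := exec.Command("kubectl",
		"--context", context,
		"get", "po",
		"-o=jsonpath={.items[*].metadata.name}",
		"-A",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	return string(out), err
}

func main() {
	pods, err := collectNonRunningPods("no-preload-837474")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if pods == "" {
		fmt.Println("all pods Running")
	} else {
		fmt.Println("non-Running pods:", pods)
	}
}

An empty result here is consistent with the logs above: the control plane, kindnet, CoreDNS, and storage-provisioner all came up, so the EnableAddonWhileActive failure lies in the addon command itself rather than in cluster health.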

TestStartStop/group/old-k8s-version/serial/Pause (5.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-987573 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-987573 --alsologtostderr -v=1: exit status 80 (2.215089537s)

-- stdout --
	* Pausing node old-k8s-version-987573 ... 
	
	

-- /stdout --
** stderr ** 
	I1119 02:44:20.815393  328538 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:20.815615  328538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:20.815625  328538 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:20.815629  328538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:20.815800  328538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:20.816012  328538 out.go:368] Setting JSON to false
	I1119 02:44:20.816061  328538 mustload.go:66] Loading cluster: old-k8s-version-987573
	I1119 02:44:20.816360  328538 config.go:182] Loaded profile config "old-k8s-version-987573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:44:20.816744  328538 cli_runner.go:164] Run: docker container inspect old-k8s-version-987573 --format={{.State.Status}}
	I1119 02:44:20.835664  328538 host.go:66] Checking if "old-k8s-version-987573" exists ...
	I1119 02:44:20.835908  328538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:20.893302  328538 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-19 02:44:20.883866216 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:20.893988  328538 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-987573 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 02:44:20.895747  328538 out.go:179] * Pausing node old-k8s-version-987573 ... 
	I1119 02:44:20.896809  328538 host.go:66] Checking if "old-k8s-version-987573" exists ...
	I1119 02:44:20.897058  328538 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:20.897106  328538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-987573
	I1119 02:44:20.913832  328538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/old-k8s-version-987573/id_rsa Username:docker}
	I1119 02:44:21.007821  328538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:21.019630  328538 pause.go:52] kubelet running: true
	I1119 02:44:21.019680  328538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:21.183044  328538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:21.183137  328538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:21.246470  328538 cri.go:89] found id: "2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee"
	I1119 02:44:21.246493  328538 cri.go:89] found id: "0be6a37a592243280ea5c142186391f3f2f26b568b8a07102398749bd16bb41a"
	I1119 02:44:21.246497  328538 cri.go:89] found id: "9838e80b4a11306fc1d1ad1687e1efac6e8267ea4a072326986fd003834c2d07"
	I1119 02:44:21.246501  328538 cri.go:89] found id: "de11ec22f706c83cf86b25166cee8deb1a767a0d7161e431ab4ff464ea56370e"
	I1119 02:44:21.246504  328538 cri.go:89] found id: "5ade0b93851f29eec6e4a88852e33c753160da8ea44034a1ae4d3403b4213d7b"
	I1119 02:44:21.246507  328538 cri.go:89] found id: "698573dc69a8a06012cd23a1989bd77a62894912ddd2392fb3c8adab817e74a2"
	I1119 02:44:21.246509  328538 cri.go:89] found id: "0b21b4a61c9e39b222029f13c6ca3c909e31c027914e269966be2589940c1b05"
	I1119 02:44:21.246512  328538 cri.go:89] found id: "52e10aa72ed87bdafda6e448ab0fe9236452ea9f877e2c66f9761af96e094140"
	I1119 02:44:21.246514  328538 cri.go:89] found id: "a3a95c851a1b1a7b23770436d155ba0f868406c9e5408bb1d6b801e15b851212"
	I1119 02:44:21.246523  328538 cri.go:89] found id: "c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	I1119 02:44:21.246526  328538 cri.go:89] found id: "a0016258b0d08479349678ea97b542cd6bed29e5be0daa43e282fc63d368df4b"
	I1119 02:44:21.246528  328538 cri.go:89] found id: ""
	I1119 02:44:21.246568  328538 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:21.258106  328538 retry.go:31] will retry after 244.380301ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:21Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:21.503618  328538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:21.516247  328538 pause.go:52] kubelet running: false
	I1119 02:44:21.516290  328538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:21.661507  328538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:21.661602  328538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:21.726598  328538 cri.go:89] found id: "2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee"
	I1119 02:44:21.726626  328538 cri.go:89] found id: "0be6a37a592243280ea5c142186391f3f2f26b568b8a07102398749bd16bb41a"
	I1119 02:44:21.726633  328538 cri.go:89] found id: "9838e80b4a11306fc1d1ad1687e1efac6e8267ea4a072326986fd003834c2d07"
	I1119 02:44:21.726638  328538 cri.go:89] found id: "de11ec22f706c83cf86b25166cee8deb1a767a0d7161e431ab4ff464ea56370e"
	I1119 02:44:21.726642  328538 cri.go:89] found id: "5ade0b93851f29eec6e4a88852e33c753160da8ea44034a1ae4d3403b4213d7b"
	I1119 02:44:21.726647  328538 cri.go:89] found id: "698573dc69a8a06012cd23a1989bd77a62894912ddd2392fb3c8adab817e74a2"
	I1119 02:44:21.726651  328538 cri.go:89] found id: "0b21b4a61c9e39b222029f13c6ca3c909e31c027914e269966be2589940c1b05"
	I1119 02:44:21.726655  328538 cri.go:89] found id: "52e10aa72ed87bdafda6e448ab0fe9236452ea9f877e2c66f9761af96e094140"
	I1119 02:44:21.726659  328538 cri.go:89] found id: "a3a95c851a1b1a7b23770436d155ba0f868406c9e5408bb1d6b801e15b851212"
	I1119 02:44:21.726666  328538 cri.go:89] found id: "c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	I1119 02:44:21.726670  328538 cri.go:89] found id: "a0016258b0d08479349678ea97b542cd6bed29e5be0daa43e282fc63d368df4b"
	I1119 02:44:21.726675  328538 cri.go:89] found id: ""
	I1119 02:44:21.726711  328538 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:21.738317  328538 retry.go:31] will retry after 335.71099ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:21Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:22.074584  328538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:22.087659  328538 pause.go:52] kubelet running: false
	I1119 02:44:22.087700  328538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:22.228859  328538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:22.228976  328538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:22.292402  328538 cri.go:89] found id: "2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee"
	I1119 02:44:22.292428  328538 cri.go:89] found id: "0be6a37a592243280ea5c142186391f3f2f26b568b8a07102398749bd16bb41a"
	I1119 02:44:22.292445  328538 cri.go:89] found id: "9838e80b4a11306fc1d1ad1687e1efac6e8267ea4a072326986fd003834c2d07"
	I1119 02:44:22.292451  328538 cri.go:89] found id: "de11ec22f706c83cf86b25166cee8deb1a767a0d7161e431ab4ff464ea56370e"
	I1119 02:44:22.292455  328538 cri.go:89] found id: "5ade0b93851f29eec6e4a88852e33c753160da8ea44034a1ae4d3403b4213d7b"
	I1119 02:44:22.292460  328538 cri.go:89] found id: "698573dc69a8a06012cd23a1989bd77a62894912ddd2392fb3c8adab817e74a2"
	I1119 02:44:22.292464  328538 cri.go:89] found id: "0b21b4a61c9e39b222029f13c6ca3c909e31c027914e269966be2589940c1b05"
	I1119 02:44:22.292468  328538 cri.go:89] found id: "52e10aa72ed87bdafda6e448ab0fe9236452ea9f877e2c66f9761af96e094140"
	I1119 02:44:22.292473  328538 cri.go:89] found id: "a3a95c851a1b1a7b23770436d155ba0f868406c9e5408bb1d6b801e15b851212"
	I1119 02:44:22.292489  328538 cri.go:89] found id: "c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	I1119 02:44:22.292497  328538 cri.go:89] found id: "a0016258b0d08479349678ea97b542cd6bed29e5be0daa43e282fc63d368df4b"
	I1119 02:44:22.292499  328538 cri.go:89] found id: ""
	I1119 02:44:22.292535  328538 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:22.303630  328538 retry.go:31] will retry after 435.431241ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:22Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:22.739221  328538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:22.751823  328538 pause.go:52] kubelet running: false
	I1119 02:44:22.751884  328538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:22.888031  328538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:22.888108  328538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:22.953087  328538 cri.go:89] found id: "2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee"
	I1119 02:44:22.953111  328538 cri.go:89] found id: "0be6a37a592243280ea5c142186391f3f2f26b568b8a07102398749bd16bb41a"
	I1119 02:44:22.953115  328538 cri.go:89] found id: "9838e80b4a11306fc1d1ad1687e1efac6e8267ea4a072326986fd003834c2d07"
	I1119 02:44:22.953118  328538 cri.go:89] found id: "de11ec22f706c83cf86b25166cee8deb1a767a0d7161e431ab4ff464ea56370e"
	I1119 02:44:22.953120  328538 cri.go:89] found id: "5ade0b93851f29eec6e4a88852e33c753160da8ea44034a1ae4d3403b4213d7b"
	I1119 02:44:22.953123  328538 cri.go:89] found id: "698573dc69a8a06012cd23a1989bd77a62894912ddd2392fb3c8adab817e74a2"
	I1119 02:44:22.953126  328538 cri.go:89] found id: "0b21b4a61c9e39b222029f13c6ca3c909e31c027914e269966be2589940c1b05"
	I1119 02:44:22.953128  328538 cri.go:89] found id: "52e10aa72ed87bdafda6e448ab0fe9236452ea9f877e2c66f9761af96e094140"
	I1119 02:44:22.953131  328538 cri.go:89] found id: "a3a95c851a1b1a7b23770436d155ba0f868406c9e5408bb1d6b801e15b851212"
	I1119 02:44:22.953136  328538 cri.go:89] found id: "c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	I1119 02:44:22.953138  328538 cri.go:89] found id: "a0016258b0d08479349678ea97b542cd6bed29e5be0daa43e282fc63d368df4b"
	I1119 02:44:22.953140  328538 cri.go:89] found id: ""
	I1119 02:44:22.953182  328538 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:22.967151  328538 out.go:203] 
	W1119 02:44:22.968352  328538 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:44:22.968370  328538 out.go:285] * 
	* 
	W1119 02:44:22.972735  328538 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:44:22.973808  328538 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-987573 --alsologtostderr -v=1 failed: exit status 80
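The stderr trace makes the failure mode concrete: pause disables the kubelet, lists the CRI containers via crictl, then calls `sudo runc list -f json` to enumerate running containers, retrying with a short backoff (~244ms, ~335ms, ~435ms above) before exiting with GUEST_PAUSE, because `/run/runc` does not exist on this crio node. A minimal Go sketch of that retry shape, useful for reproducing the check outside the test harness (this is not minikube's pause.go; the backoff values are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runcList runs the exact command from the trace; on this node it fails with
// "open /run/runc: no such file or directory" because the runc state directory
// is absent under the crio runtime.
func runcList() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	return string(out), err
}

func main() {
	// Backoff intervals taken from the retry.go lines in the trace above.
	backoffs := []time.Duration{
		244 * time.Millisecond,
		335 * time.Millisecond,
		435 * time.Millisecond,
	}
	for _, d := range backoffs {
		out, err := runcList()
		if err == nil {
			fmt.Println("running containers:", out)
			return
		}
		fmt.Printf("runc list failed (%v); retrying after %v\n", err, d)
		time.Sleep(d)
	}
	// Mirrors the final GUEST_PAUSE exit once all retries are exhausted.
	fmt.Println("exiting: GUEST_PAUSE: list running containers failed")
}

Note that crictl did find the expected container IDs on every attempt; only the runc state listing fails, which points at the runtime state directory rather than at the containers themselves.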
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-987573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-987573:

-- stdout --
	[
	    {
	        "Id": "ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71",
	        "Created": "2025-11-19T02:42:22.008498904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317433,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:43:37.301287263Z",
	            "FinishedAt": "2025-11-19T02:43:36.3737728Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/hosts",
	        "LogPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71-json.log",
	        "Name": "/old-k8s-version-987573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-987573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-987573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71",
	                "LowerDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-987573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-987573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-987573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-987573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-987573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "17e03c5e389717bc05f7a2427c9b292a19f2d4f7bf1480acc544b4d4c621f4c1",
	            "SandboxKey": "/var/run/docker/netns/17e03c5e3897",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-987573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d7fb52c0aef23ee545f7da5c971e8a8676f2221b08ae8d87614b0f88b577986",
	                    "EndpointID": "4f8b7b3e81a725a6a839f84ec17adb0209131308bf240214d3974abb607b26b9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "a2:a0:b3:db:af:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-987573",
	                        "ae750ceb959b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
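A note on the inspect dump above: the host-side port for a mapped container port can be read straight out of that JSON with a Go template, which is the same technique the harness itself uses later in this log (the `cli_runner` lines running `docker container inspect -f`). A minimal sketch against this profile:

	# prints the host port bound to the container's SSH port (22/tcp) -- 33108 in the dump above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-987573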
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-987573 -n old-k8s-version-987573
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-987573 -n old-k8s-version-987573: exit status 2 (312.081651ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-987573 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-987573 logs -n 25: (1.056329044s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-001617 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo crio config                                                                                                                                                                                                             │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p bridge-001617                                                                                                                                                                                                                              │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p disable-driver-mounts-682232                                                                                                                                                                                                               │ disable-driver-mounts-682232 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p old-k8s-version-987573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-167150 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:43:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:43:55.428473  322722 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:43:55.428738  322722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:43:55.428748  322722 out.go:374] Setting ErrFile to fd 2...
	I1119 02:43:55.428752  322722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:43:55.428986  322722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:43:55.429462  322722 out.go:368] Setting JSON to false
	I1119 02:43:55.430538  322722 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5182,"bootTime":1763515053,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:43:55.430632  322722 start.go:143] virtualization: kvm guest
	I1119 02:43:55.432528  322722 out.go:179] * [no-preload-837474] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:43:55.433950  322722 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:43:55.433980  322722 notify.go:221] Checking for updates...
	I1119 02:43:55.436001  322722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:43:55.437466  322722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:55.438636  322722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:43:55.439931  322722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:43:55.441572  322722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:43:54.285873  320707 ssh_runner.go:195] Run: cat /version.json
	I1119 02:43:54.285916  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:54.285934  320707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:43:54.285997  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:54.304991  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:54.305271  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:54.399265  320707 ssh_runner.go:195] Run: systemctl --version
	I1119 02:43:54.465660  320707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:43:54.506992  320707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:43:54.511582  320707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:43:54.511646  320707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:43:54.520206  320707 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:43:54.520229  320707 start.go:496] detecting cgroup driver to use...
	I1119 02:43:54.520257  320707 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:43:54.520315  320707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:43:54.534914  320707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:43:54.547340  320707 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:43:54.547391  320707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:43:54.561592  320707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:43:54.575624  320707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:43:54.653367  320707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:43:54.757461  320707 docker.go:234] disabling docker service ...
	I1119 02:43:54.757544  320707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:43:54.772667  320707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:43:54.784628  320707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:43:54.877357  320707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:43:54.971404  320707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
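	# Aside: the stop -> disable -> mask ladder above is the standard way to retire a competing
	# runtime; `mask` symlinks the unit to /dev/null so no dependency can pull it back up.
	# Minimal sketch of the same sequence outside the harness:
	#   sudo systemctl stop -f docker.socket docker.service
	#   sudo systemctl disable docker.socket
	#   sudo systemctl mask docker.service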
	I1119 02:43:54.989534  320707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:43:55.003067  320707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:43:55.003125  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.011984  320707 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:43:55.012050  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.020366  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.028592  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.037402  320707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:43:55.046391  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.064425  320707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.076092  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
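	# Aside: the CRI configuration steps above boil down to two artifacts. The tee a few lines up
	# leaves a one-line /etc/crictl.yaml pointing crictl at the CRI-O socket:
	#   runtime-endpoint: unix:///var/run/crio/crio.sock
	# and the sed edits produce roughly this fragment in /etc/crio/crio.conf.d/02-crio.conf
	# (a sketch of the net effect, not a verbatim dump):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]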
	I1119 02:43:55.091206  320707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:43:55.100105  320707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:43:55.111263  320707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:55.221869  320707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:43:55.350118  320707 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:43:55.350185  320707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:43:55.354534  320707 start.go:564] Will wait 60s for crictl version
	I1119 02:43:55.354594  320707 ssh_runner.go:195] Run: which crictl
	I1119 02:43:55.358043  320707 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:43:55.382945  320707 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:43:55.383026  320707 ssh_runner.go:195] Run: crio --version
	I1119 02:43:55.415001  320707 ssh_runner.go:195] Run: crio --version
	I1119 02:43:55.448080  320707 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:43:55.443088  322722 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:55.443554  322722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:43:55.468196  322722 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:43:55.468281  322722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:43:55.527928  322722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-19 02:43:55.516761816 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:43:55.528029  322722 docker.go:319] overlay module found
	I1119 02:43:55.529631  322722 out.go:179] * Using the docker driver based on existing profile
	I1119 02:43:55.530733  322722 start.go:309] selected driver: docker
	I1119 02:43:55.530744  322722 start.go:930] validating driver "docker" against &{Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:43:55.530824  322722 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:43:55.531356  322722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:43:55.591349  322722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-19 02:43:55.581212886 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:43:55.591729  322722 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:55.591766  322722 cni.go:84] Creating CNI manager for ""
	I1119 02:43:55.591822  322722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:55.591869  322722 start.go:353] cluster config:
	{Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:43:55.597458  322722 out.go:179] * Starting "no-preload-837474" primary control-plane node in "no-preload-837474" cluster
	I1119 02:43:55.600595  322722 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:43:55.601763  322722 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:43:55.602926  322722 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:43:55.603020  322722 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:43:55.603052  322722 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/config.json ...
	I1119 02:43:55.603284  322722 cache.go:107] acquiring lock: {Name:mk0b4a5ed1b254b5d61172b3c33fc894da77be9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603313  322722 cache.go:107] acquiring lock: {Name:mkddee0277675ded6b2e43d9db23318e5b303890 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603418  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 02:43:55.603394  322722 cache.go:107] acquiring lock: {Name:mk52ee23ebdd5f1abc2a7e417a2896e8538de4dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603450  322722 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 164.772µs
	I1119 02:43:55.603468  322722 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 02:43:55.603465  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 02:43:55.603460  322722 cache.go:107] acquiring lock: {Name:mk8b71fab168cd41fe90be16c8f6c892544feb60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603485  322722 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 191.551µs
	I1119 02:43:55.603496  322722 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 02:43:55.603455  322722 cache.go:107] acquiring lock: {Name:mk58f7adfb29feece603cf6d9222a90ab24abc38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603305  322722 cache.go:107] acquiring lock: {Name:mk1acaa7e17abb35c0a1b36f8014c55ac138b78f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603530  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 02:43:55.603527  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 02:43:55.603546  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 02:43:55.603540  322722 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 192.592µs
	I1119 02:43:55.603554  322722 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 260.582µs
	I1119 02:43:55.603542  322722 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 85.38µs
	I1119 02:43:55.603559  322722 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 02:43:55.603563  322722 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 02:43:55.603565  322722 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 02:43:55.603289  322722 cache.go:107] acquiring lock: {Name:mkcbdda5a2c225a14d113eb60bf1b63a9f7af468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603571  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 02:43:55.603562  322722 cache.go:107] acquiring lock: {Name:mk789e792a18684b36279333d1d2a3790dd7ce3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603625  322722 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 225.502µs
	I1119 02:43:55.603641  322722 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 02:43:55.603641  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 02:43:55.603650  322722 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 376.33µs
	I1119 02:43:55.603662  322722 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 02:43:55.603716  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1119 02:43:55.603731  322722 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 224.573µs
	I1119 02:43:55.603752  322722 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 02:43:55.603767  322722 cache.go:87] Successfully saved all images to host disk.
	I1119 02:43:55.623230  322722 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:43:55.623252  322722 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:43:55.623272  322722 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:43:55.623300  322722 start.go:360] acquireMachinesLock for no-preload-837474: {Name:mk39987c4e02a0b7f1a15807d776065c6d095ec8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.623357  322722 start.go:364] duration metric: took 38.318µs to acquireMachinesLock for "no-preload-837474"
	I1119 02:43:55.623378  322722 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:43:55.623388  322722 fix.go:54] fixHost starting: 
	I1119 02:43:55.623807  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:55.640610  322722 fix.go:112] recreateIfNeeded on no-preload-837474: state=Stopped err=<nil>
	W1119 02:43:55.640633  322722 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:43:55.449157  320707 cli_runner.go:164] Run: docker network inspect embed-certs-811173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:43:55.469701  320707 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:43:55.474249  320707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
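	# Aside: the hosts rewrite above uses the filter-then-append idiom because a plain
	# `sudo cmd > /etc/hosts` would perform the redirection in the unprivileged shell and fail;
	# writing to a per-PID temp file ($$) and `sudo cp`-ing it back avoids that. Generic sketch:
	#   { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts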
	I1119 02:43:55.485867  320707 kubeadm.go:884] updating cluster {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:43:55.486009  320707 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:43:55.486065  320707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:43:55.525836  320707 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:43:55.525855  320707 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:43:55.525897  320707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:43:55.553608  320707 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:43:55.553634  320707 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:43:55.553644  320707 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 02:43:55.553765  320707 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-811173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:43:55.553843  320707 ssh_runner.go:195] Run: crio config
	I1119 02:43:55.602252  320707 cni.go:84] Creating CNI manager for ""
	I1119 02:43:55.602274  320707 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:55.602288  320707 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:43:55.602309  320707 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-811173 NodeName:embed-certs-811173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:43:55.602445  320707 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-811173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
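	# Aside: the config above is stock kubeadm v1beta4 plus kubelet/kube-proxy component configs;
	# per the scp lines below it lands in /var/tmp/minikube/kubeadm.yaml.new. As a hedged sketch
	# (assuming a kubeadm release recent enough to ship the subcommand), such a file can be
	# linted before use with:
	#   kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new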
	
	I1119 02:43:55.602516  320707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:43:55.611132  320707 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:43:55.611188  320707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:43:55.619384  320707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:43:55.632653  320707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:43:55.645483  320707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 02:43:55.658251  320707 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:43:55.662538  320707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:43:55.672971  320707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:55.755627  320707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:55.787084  320707 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173 for IP: 192.168.85.2
	I1119 02:43:55.787108  320707 certs.go:195] generating shared ca certs ...
	I1119 02:43:55.787129  320707 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:55.787288  320707 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:43:55.787339  320707 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:43:55.787349  320707 certs.go:257] generating profile certs ...
	I1119 02:43:55.787497  320707 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key
	I1119 02:43:55.787571  320707 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4
	I1119 02:43:55.787627  320707 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key
	I1119 02:43:55.787764  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:43:55.787816  320707 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:43:55.787831  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:43:55.787865  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:43:55.787898  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:43:55.787928  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:43:55.787995  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:43:55.788908  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:43:55.814136  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:43:55.841341  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:43:55.862047  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:43:55.888515  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:43:55.906292  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:43:55.923009  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:43:55.940508  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:43:55.957798  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:43:55.977304  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:43:55.998424  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:43:56.018715  320707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
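(Editor's note: the `scp memory -->` entries above are minikube's ssh_runner shorthand for content assembled in memory, here the generated kubeconfig, and streamed to the target path over SSH, as opposed to a file copied from the local disk.)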
	I1119 02:43:56.031184  320707 ssh_runner.go:195] Run: openssl version
	I1119 02:43:56.037204  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:43:56.045741  320707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:43:56.049423  320707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:43:56.049479  320707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:43:56.085397  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:43:56.093019  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:43:56.104553  320707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:43:56.110028  320707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:43:56.110076  320707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:43:56.144840  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:43:56.152756  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:43:56.160727  320707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:43:56.164354  320707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:43:56.164395  320707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:43:56.208726  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
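(Editor's note: the three test/ln/hash rounds above implement OpenSSL's hashed-directory convention: each CA certificate under /etc/ssl/certs gets a symlink named after its subject-name hash with a ".0" suffix, b5213941.0, 51391683.0 and 3ec20f2e.0 in this run, which is how OpenSSL locates a CA at verification time. A minimal Go sketch of that step, assuming the openssl binary is on PATH; linkBySubjectHash and the paths in main are illustrative, not minikube's own code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces `ln -fs cert /etc/ssl/certs/<hash>.0`, where
// <hash> is what `openssl x509 -hash -noout` prints (e.g. b5213941 above).
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f behaviour: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
)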
	I1119 02:43:56.216934  320707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:43:56.221328  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:43:56.266359  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:43:56.306134  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:43:56.356664  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:43:56.411481  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:43:56.469333  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
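(Editor's note: `openssl x509 -checkend 86400` exits non-zero if the certificate stops being valid within the next 86400 seconds, so the six runs above are 24-hour expiry checks on the control-plane client and etcd certs. A rough crypto/x509 equivalent, as a sketch; the path is just one of those checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path stops being
// valid within d, which is what `-checkend 86400` asks with d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
)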
	I1119 02:43:56.506095  320707 kubeadm.go:401] StartCluster: {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:43:56.506191  320707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:43:56.506243  320707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:43:56.538459  320707 cri.go:89] found id: "e0994ea94767873e5f7aa16af71ef5155fc15391a563da35948cadb1520f80bd"
	I1119 02:43:56.538482  320707 cri.go:89] found id: "b9603bf135a48a7fd7f1a7df00bc5ac2ca325854631a2e9109eebbe9c579c3fc"
	I1119 02:43:56.538487  320707 cri.go:89] found id: "05974f8fe2ed9b3af8b149d271de0fd120542bca0e181f00cc290f0684748003"
	I1119 02:43:56.538490  320707 cri.go:89] found id: "706b2dbda2d38ebc2ca3e61f6b17e96a3d75c375c204a2bcebbf88ede678a129"
	I1119 02:43:56.538494  320707 cri.go:89] found id: ""
	I1119 02:43:56.538542  320707 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:43:56.550639  320707 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:43:56Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:43:56.550691  320707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:43:56.558911  320707 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:43:56.558927  320707 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:43:56.558968  320707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:43:56.566586  320707 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:43:56.567107  320707 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-811173" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:56.567372  320707 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11126/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-811173" cluster setting kubeconfig missing "embed-certs-811173" context setting]
	I1119 02:43:56.567809  320707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:56.569379  320707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:43:56.577463  320707 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 02:43:56.577490  320707 kubeadm.go:602] duration metric: took 18.556909ms to restartPrimaryControlPlane
	I1119 02:43:56.577499  320707 kubeadm.go:403] duration metric: took 71.414046ms to StartCluster
	I1119 02:43:56.577514  320707 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:56.577579  320707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:56.579038  320707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:56.579269  320707 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:56.579330  320707 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:56.579439  320707 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-811173"
	I1119 02:43:56.579458  320707 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-811173"
	W1119 02:43:56.579466  320707 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:43:56.579465  320707 addons.go:70] Setting dashboard=true in profile "embed-certs-811173"
	I1119 02:43:56.579488  320707 addons.go:239] Setting addon dashboard=true in "embed-certs-811173"
	I1119 02:43:56.579494  320707 host.go:66] Checking if "embed-certs-811173" exists ...
	W1119 02:43:56.579499  320707 addons.go:248] addon dashboard should already be in state true
	I1119 02:43:56.579516  320707 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:56.579533  320707 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:56.579563  320707 addons.go:70] Setting default-storageclass=true in profile "embed-certs-811173"
	I1119 02:43:56.579580  320707 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-811173"
	I1119 02:43:56.579846  320707 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:56.579978  320707 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:56.580011  320707 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:56.581204  320707 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:56.582446  320707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:56.605762  320707 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:56.605810  320707 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:43:56.606576  320707 addons.go:239] Setting addon default-storageclass=true in "embed-certs-811173"
	W1119 02:43:56.606598  320707 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:43:56.606623  320707 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:56.607082  320707 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:56.607213  320707 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:56.607232  320707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:56.607294  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:56.610564  320707 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1119 02:43:52.207759  317113 pod_ready.go:104] pod "coredns-5dd5756b68-djd8r" is not "Ready", error: node "old-k8s-version-987573" hosting pod "coredns-5dd5756b68-djd8r" is not "Ready" (will retry)
	W1119 02:43:54.208568  317113 pod_ready.go:104] pod "coredns-5dd5756b68-djd8r" is not "Ready", error: node "old-k8s-version-987573" hosting pod "coredns-5dd5756b68-djd8r" is not "Ready" (will retry)
	I1119 02:43:56.708080  317113 pod_ready.go:94] pod "coredns-5dd5756b68-djd8r" is "Ready"
	I1119 02:43:56.708112  317113 pod_ready.go:86] duration metric: took 9.006187672s for pod "coredns-5dd5756b68-djd8r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:56.711315  317113 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:56.611723  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:43:56.611741  320707 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:43:56.611792  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:56.640805  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:56.641296  320707 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:56.641311  320707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:56.641364  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:56.646841  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:56.666259  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
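(Editor's note: the repeated `docker container inspect -f` expression above is a Go template that digs the host port mapped to the container's 22/tcp out of the inspect output; the resulting port, 33113 here, is what the new ssh clients dial on 127.0.0.1. A toy, self-contained demonstration of that exact template expression against a simplified stand-in for Docker's inspect structure:

package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for the slice of docker's inspect output that the
// template touches; the real structure has many more fields.
type PortBinding struct{ HostPort string }
type NetworkSettings struct{ Ports map[string][]PortBinding }
type Container struct{ NetworkSettings NetworkSettings }

func main() {
	c := Container{NetworkSettings{Ports: map[string][]PortBinding{
		"22/tcp": {{HostPort: "33113"}},
	}}}
	// Same expression as in the log, minus the surrounding shell quoting.
	tpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tpl.Execute(os.Stdout, c); err != nil { // prints 33113
		panic(err)
	}
}
)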
	I1119 02:43:56.726501  320707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:56.739974  320707 node_ready.go:35] waiting up to 6m0s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:56.750619  320707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:56.759982  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:43:56.760032  320707 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:43:56.775475  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:43:56.775497  320707 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:43:56.777383  320707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:56.793034  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:43:56.793054  320707 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:43:56.810675  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:43:56.810695  320707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:43:56.827797  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:43:56.827818  320707 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:43:56.841837  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:43:56.841863  320707 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:43:56.854192  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:43:56.854211  320707 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:43:56.866106  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:43:56.866123  320707 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:43:56.877864  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:43:56.877884  320707 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:43:56.889684  320707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:43:58.034685  320707 node_ready.go:49] node "embed-certs-811173" is "Ready"
	I1119 02:43:58.034733  320707 node_ready.go:38] duration metric: took 1.294722255s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:58.034751  320707 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:58.034822  320707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:58.589508  320707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.83885118s)
	I1119 02:43:58.589595  320707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812177747s)
	I1119 02:43:58.589720  320707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.700002287s)
	I1119 02:43:58.589745  320707 api_server.go:72] duration metric: took 2.010447222s to wait for apiserver process to appear ...
	I1119 02:43:58.589761  320707 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:58.589781  320707 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:58.591316  320707 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-811173 addons enable metrics-server
	
	I1119 02:43:58.597926  320707 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:43:58.597955  320707 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:43:58.604106  320707 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 02:43:58.605045  320707 addons.go:515] duration metric: took 2.02572569s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 02:43:59.090526  320707 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:59.095309  320707 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:43:59.095340  320707 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
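(Editor's note: the 500s above come from the apiserver's /healthz while poststarthooks are still completing; between the two probes the scheduling/bootstrap-system-priority-classes hook flipped to ok, leaving only rbac/bootstrap-roles pending, and minikube simply re-polls until the endpoint returns 200. A bare-bones polling loop in the same spirit, as a sketch; the endpoint is this cluster's, and the insecure TLS config stands in for loading minikubeCA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Stand-in for trusting minikubeCA; a real check should load the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 120; attempt++ {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}
)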
	I1119 02:43:54.544732  321785 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-167150" ...
	I1119 02:43:54.544793  321785 cli_runner.go:164] Run: docker start default-k8s-diff-port-167150
	I1119 02:43:54.848898  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:54.869258  321785 kic.go:430] container "default-k8s-diff-port-167150" state is running.
	I1119 02:43:54.869657  321785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:43:54.889310  321785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json ...
	I1119 02:43:54.889538  321785 machine.go:94] provisionDockerMachine start ...
	I1119 02:43:54.889606  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:54.914941  321785 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:54.915282  321785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1119 02:43:54.915302  321785 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:43:54.916085  321785 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45204->127.0.0.1:33118: read: connection reset by peer
	I1119 02:43:58.068557  321785 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:43:58.068611  321785 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-167150"
	I1119 02:43:58.068771  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:58.093829  321785 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:58.094063  321785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1119 02:43:58.094073  321785 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-167150 && echo "default-k8s-diff-port-167150" | sudo tee /etc/hostname
	I1119 02:43:58.245203  321785 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:43:58.245283  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:58.267476  321785 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:58.267925  321785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1119 02:43:58.267962  321785 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-167150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-167150/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-167150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:43:58.407057  321785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
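(Editor's note: the SSH'd script above keeps /etc/hosts consistent with the freshly set hostname: if no line already names default-k8s-diff-port-167150, it either rewrites the existing 127.0.1.1 entry or appends one, so local name resolution keeps working after the rename; the empty command output means the entry was already present. The same script runs later in this section for no-preload-837474.)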
	I1119 02:43:58.407086  321785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:43:58.407110  321785 ubuntu.go:190] setting up certificates
	I1119 02:43:58.407122  321785 provision.go:84] configureAuth start
	I1119 02:43:58.407196  321785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:43:58.428099  321785 provision.go:143] copyHostCerts
	I1119 02:43:58.428188  321785 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:43:58.428206  321785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:43:58.428287  321785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:43:58.428413  321785 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:43:58.428424  321785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:43:58.428490  321785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:43:58.428621  321785 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:43:58.428634  321785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:43:58.428686  321785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:43:58.428792  321785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-167150 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-167150 localhost minikube]
	I1119 02:43:58.832957  321785 provision.go:177] copyRemoteCerts
	I1119 02:43:58.833014  321785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:43:58.833055  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:58.850615  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:58.951401  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:43:58.971768  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:43:58.989607  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 02:43:59.005852  321785 provision.go:87] duration metric: took 598.718896ms to configureAuth
	I1119 02:43:59.005897  321785 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:43:59.006096  321785 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:59.006211  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.026841  321785 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:59.027142  321785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1119 02:43:59.027169  321785 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:43:55.643058  322722 out.go:252] * Restarting existing docker container for "no-preload-837474" ...
	I1119 02:43:55.643109  322722 cli_runner.go:164] Run: docker start no-preload-837474
	I1119 02:43:55.963827  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:55.984623  322722 kic.go:430] container "no-preload-837474" state is running.
	I1119 02:43:55.985077  322722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-837474
	I1119 02:43:56.004259  322722 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/config.json ...
	I1119 02:43:56.004477  322722 machine.go:94] provisionDockerMachine start ...
	I1119 02:43:56.004558  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:56.023538  322722 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:56.023843  322722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1119 02:43:56.023870  322722 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:43:56.024501  322722 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35268->127.0.0.1:33123: read: connection reset by peer
	I1119 02:43:59.168317  322722 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-837474
	
	I1119 02:43:59.168363  322722 ubuntu.go:182] provisioning hostname "no-preload-837474"
	I1119 02:43:59.168426  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.193581  322722 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:59.193913  322722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1119 02:43:59.193929  322722 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-837474 && echo "no-preload-837474" | sudo tee /etc/hostname
	I1119 02:43:59.358400  322722 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-837474
	
	I1119 02:43:59.358511  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.379495  322722 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:59.379699  322722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1119 02:43:59.379723  322722 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-837474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-837474/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-837474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:43:59.514602  322722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:43:59.514631  322722 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:43:59.514653  322722 ubuntu.go:190] setting up certificates
	I1119 02:43:59.514666  322722 provision.go:84] configureAuth start
	I1119 02:43:59.514742  322722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-837474
	I1119 02:43:59.533401  322722 provision.go:143] copyHostCerts
	I1119 02:43:59.533471  322722 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:43:59.533484  322722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:43:59.533561  322722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:43:59.533699  322722 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:43:59.533710  322722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:43:59.533751  322722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:43:59.533851  322722 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:43:59.533862  322722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:43:59.533895  322722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:43:59.533979  322722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.no-preload-837474 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-837474]
	I1119 02:43:59.731844  322722 provision.go:177] copyRemoteCerts
	I1119 02:43:59.731935  322722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:43:59.731990  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.756868  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:43:59.859188  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:43:59.876559  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:43:59.894074  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:43:59.910722  322722 provision.go:87] duration metric: took 396.042204ms to configureAuth
	I1119 02:43:59.910750  322722 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:43:59.910921  322722 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:59.911020  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.934165  322722 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:59.934501  322722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1119 02:43:59.934528  322722 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:44:00.278168  322722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:44:00.278195  322722 machine.go:97] duration metric: took 4.273701801s to provisionDockerMachine
	I1119 02:44:00.278208  322722 start.go:293] postStartSetup for "no-preload-837474" (driver="docker")
	I1119 02:44:00.278221  322722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:44:00.278294  322722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:44:00.278343  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:00.305893  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:00.400223  322722 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:44:00.403578  322722 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:44:00.403608  322722 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:44:00.403620  322722 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:44:00.403673  322722 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:44:00.403766  322722 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:44:00.403884  322722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:44:00.411198  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:00.428413  322722 start.go:296] duration metric: took 150.192775ms for postStartSetup
	I1119 02:44:00.428491  322722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:44:00.428524  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.399187  321785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:43:59.399216  321785 machine.go:97] duration metric: took 4.509661678s to provisionDockerMachine
	I1119 02:43:59.399231  321785 start.go:293] postStartSetup for "default-k8s-diff-port-167150" (driver="docker")
	I1119 02:43:59.399245  321785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:43:59.399309  321785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:43:59.399358  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.418696  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:59.517864  321785 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:43:59.521401  321785 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:43:59.521462  321785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:43:59.521476  321785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:43:59.521529  321785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:43:59.521628  321785 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:43:59.521736  321785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:43:59.529278  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:43:59.548188  321785 start.go:296] duration metric: took 148.944112ms for postStartSetup
	I1119 02:43:59.548256  321785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:43:59.548301  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.568911  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:59.661301  321785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:43:59.665639  321785 fix.go:56] duration metric: took 5.140061101s for fixHost
	I1119 02:43:59.665665  321785 start.go:83] releasing machines lock for "default-k8s-diff-port-167150", held for 5.140105804s
	I1119 02:43:59.665730  321785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:43:59.688059  321785 ssh_runner.go:195] Run: cat /version.json
	I1119 02:43:59.688100  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.688149  321785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:43:59.688215  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.708829  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:59.709773  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:59.806678  321785 ssh_runner.go:195] Run: systemctl --version
	I1119 02:43:59.860389  321785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:43:59.894335  321785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:43:59.898990  321785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:43:59.899045  321785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:43:59.906344  321785 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:43:59.906364  321785 start.go:496] detecting cgroup driver to use...
	I1119 02:43:59.906390  321785 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:43:59.906453  321785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:43:59.923664  321785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:43:59.938620  321785 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:43:59.938684  321785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:43:59.954745  321785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:43:59.969039  321785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:00.056088  321785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:00.155244  321785 docker.go:234] disabling docker service ...
	I1119 02:44:00.155300  321785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:00.173143  321785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:00.187714  321785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:00.270749  321785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:00.353802  321785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:44:00.365520  321785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:00.379891  321785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:00.379949  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.388308  321785 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:00.388357  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.396576  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.405318  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.413548  321785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:00.421252  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.431408  321785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.439519  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
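(Editor's note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, switching the cgroup manager to systemd with conmon in the pod cgroup, and re-adding the unprivileged-port sysctl. The fragment below is a plausible reconstruction of the resulting drop-in, not a capture from this run; the section headers are assumed from stock CRI-O configuration layouts:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
)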
	I1119 02:44:00.448740  321785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:00.455638  321785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:00.462608  321785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:00.544149  321785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:44:00.683768  321785 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:00.683840  321785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:00.688133  321785 start.go:564] Will wait 60s for crictl version
	I1119 02:44:00.688189  321785 ssh_runner.go:195] Run: which crictl
	I1119 02:44:00.692065  321785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:00.716044  321785 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:44:00.716120  321785 ssh_runner.go:195] Run: crio --version
	I1119 02:44:00.745396  321785 ssh_runner.go:195] Run: crio --version
	I1119 02:44:00.775063  321785 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:44:00.447787  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:00.543664  322722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:44:00.548285  322722 fix.go:56] duration metric: took 4.92489252s for fixHost
	I1119 02:44:00.548311  322722 start.go:83] releasing machines lock for "no-preload-837474", held for 4.924940102s
	I1119 02:44:00.548377  322722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-837474
	I1119 02:44:00.567418  322722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:44:00.567508  322722 ssh_runner.go:195] Run: cat /version.json
	I1119 02:44:00.567548  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:00.567563  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:00.587014  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:00.588024  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
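	Both docker container inspect calls above use the same Go template to recover the host port Docker mapped to the container's SSH port. Run standalone (container name taken from this run) it reads:
	  # index into .NetworkSettings.Ports["22/tcp"][0] and print its HostPort
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-837474
	  # -> 33123, matching Port:33123 in the sshutil lines above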
	I1119 02:44:00.681657  322722 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:00.742242  322722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:44:00.778533  322722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:44:00.783844  322722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:44:00.783902  322722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:44:00.792717  322722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
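	The find pass above disables competing CNI configs by rename rather than deletion: any bridge or podman config in /etc/cni/net.d gets a .mk_disabled suffix, leaving only the CNI minikube picks (kindnet, per the cni.go lines further down) active, and the move stays reversible. The same command, reformatted for readability:
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;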
	I1119 02:44:00.792741  322722 start.go:496] detecting cgroup driver to use...
	I1119 02:44:00.792774  322722 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:44:00.792822  322722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:44:00.808286  322722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:44:00.820699  322722 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:44:00.820754  322722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:44:00.838152  322722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:44:00.853523  322722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:00.961698  322722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:01.076515  322722 docker.go:234] disabling docker service ...
	I1119 02:44:01.076582  322722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:01.095571  322722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:01.108321  322722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:01.208624  322722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:01.330008  322722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:44:01.345790  322722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:01.361656  322722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:01.361714  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.370790  322722 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:01.370846  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.380584  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.390006  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.399802  322722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:01.408671  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.419067  322722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.428291  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.437373  322722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:01.444629  322722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:01.452824  322722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:01.552680  322722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:44:01.734287  322722 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:01.734362  322722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:01.740011  322722 start.go:564] Will wait 60s for crictl version
	I1119 02:44:01.740146  322722 ssh_runner.go:195] Run: which crictl
	I1119 02:44:01.744551  322722 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:01.773153  322722 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:44:01.773234  322722 ssh_runner.go:195] Run: crio --version
	I1119 02:44:01.817082  322722 ssh_runner.go:195] Run: crio --version
	I1119 02:44:01.864750  322722 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1119 02:43:58.715760  317113 pod_ready.go:104] pod "etcd-old-k8s-version-987573" is not "Ready", error: <nil>
	W1119 02:44:00.717005  317113 pod_ready.go:104] pod "etcd-old-k8s-version-987573" is not "Ready", error: <nil>
	I1119 02:44:01.218344  317113 pod_ready.go:94] pod "etcd-old-k8s-version-987573" is "Ready"
	I1119 02:44:01.218388  317113 pod_ready.go:86] duration metric: took 4.507040325s for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.223345  317113 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.229114  317113 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-987573" is "Ready"
	I1119 02:44:01.229141  317113 pod_ready.go:86] duration metric: took 5.771752ms for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.232611  317113 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.237839  317113 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-987573" is "Ready"
	I1119 02:44:01.237862  317113 pod_ready.go:86] duration metric: took 5.22401ms for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.241010  317113 pod_ready.go:83] waiting for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.415680  317113 pod_ready.go:94] pod "kube-proxy-tmqhk" is "Ready"
	I1119 02:44:01.415706  317113 pod_ready.go:86] duration metric: took 174.671754ms for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.615862  317113 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:00.776476  321785 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:00.796310  321785 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:00.800290  321785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
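	The /etc/hosts update above is a filter-then-rewrite rather than a blind append: drop any stale line for the name (matched by the leading tab plus an end anchor), re-add the current mapping, and copy the temp file over /etc/hosts. The same pipeline, reformatted:
	  {
	    grep -v $'\thost.minikube.internal$' /etc/hosts    # remove any old entry
	    echo "192.168.94.1	host.minikube.internal"        # append the fresh one
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts                         # replace in one cp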
	I1119 02:44:00.810096  321785 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:00.810253  321785 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:00.810315  321785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:00.849031  321785 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:00.849054  321785 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:44:00.849120  321785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:00.882209  321785 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:00.882231  321785 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:00.882239  321785 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1119 02:44:00.882332  321785 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-167150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
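	The unit text above is a systemd drop-in rather than a full service file: the bare ExecStart= first clears the packaged command, then the override re-launches kubelet with the node-specific flags. Per the scp lines below it lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, so one way to inspect the merged result on the node (profile name taken from this run) is:
	  minikube -p default-k8s-diff-port-167150 ssh "systemctl cat kubelet"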
	I1119 02:44:00.882412  321785 ssh_runner.go:195] Run: crio config
	I1119 02:44:00.949078  321785 cni.go:84] Creating CNI manager for ""
	I1119 02:44:00.949109  321785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:00.949131  321785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:44:00.949161  321785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-167150 NodeName:default-k8s-diff-port-167150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:00.949340  321785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-167150"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:44:00.949417  321785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:00.958676  321785 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:00.958740  321785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:00.967278  321785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 02:44:00.985547  321785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:01.006153  321785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
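	The file just copied, /var/tmp/minikube/kubeadm.yaml.new, is the config dumped above. As an aside for anyone reproducing this run: assuming kubeadm sits next to kubelet in the binaries directory checked earlier (minikube's usual layout), the generated config can be sanity-checked offline before it is ever applied:
	  # structural validation only; the subcommand needs kubeadm >= 1.26
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new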
	I1119 02:44:01.022191  321785 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:01.026556  321785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:01.040900  321785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:01.145157  321785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:01.178937  321785 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150 for IP: 192.168.94.2
	I1119 02:44:01.178956  321785 certs.go:195] generating shared ca certs ...
	I1119 02:44:01.178986  321785 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:01.179197  321785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:01.179258  321785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:01.179272  321785 certs.go:257] generating profile certs ...
	I1119 02:44:01.179376  321785 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key
	I1119 02:44:01.179478  321785 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4
	I1119 02:44:01.179534  321785 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key
	I1119 02:44:01.179689  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:01.179732  321785 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:01.179747  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:01.179786  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:01.179837  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:01.179873  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:01.179933  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:01.180613  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:01.200998  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:01.224874  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:01.250349  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:01.276007  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 02:44:01.313169  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:44:01.333954  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:01.357045  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:44:01.374673  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:01.393980  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:01.414242  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:01.434091  321785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:01.447113  321785 ssh_runner.go:195] Run: openssl version
	I1119 02:44:01.453537  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:01.461547  321785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:01.464946  321785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:01.464992  321785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:01.514904  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:01.524612  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:01.535794  321785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:01.540374  321785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:01.540455  321785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:01.595144  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:44:01.606044  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:01.615659  321785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:01.620077  321785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:01.620136  321785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:01.663985  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
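	The openssl x509 -hash runs above explain the cryptic link names: OpenSSL locates CAs in /etc/ssl/certs by subject-name hash, so each certificate needs a <hash>.0 symlink (the .0 is a collision index). Recomputing one of them:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # -> b5213941, hence the /etc/ssl/certs/b5213941.0 link created above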
	I1119 02:44:01.672722  321785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:01.676954  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:44:01.721257  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:44:01.771902  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:44:01.831420  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:44:01.919067  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:44:01.979121  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
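	All six openssl runs above pass -checkend 86400, which exits non-zero if the certificate expires within 86400 seconds (24 hours); since the restart proceeds to StartCluster below, every control-plane cert passed. Standalone form:
	  # prints "Certificate will not expire" and exits 0 while >24h remain
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400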
	I1119 02:44:02.041386  321785 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:02.041507  321785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:02.041577  321785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:02.087330  321785 cri.go:89] found id: "0850d32773d1729f97e0f3baf42d1b3638a7327abc66f584efafbdaa4334a283"
	I1119 02:44:02.087357  321785 cri.go:89] found id: "299bbab984622e99c9bf240099fd1891299f48da807c2b0ab1553ad4885d7c13"
	I1119 02:44:02.087427  321785 cri.go:89] found id: "7cdb91f63703193832fa8fc84ec766b4d87e2ac3e24887dcbcb074dfdac9634d"
	I1119 02:44:02.087450  321785 cri.go:89] found id: "f308d3728814cf13897a458da3b827483ae71b6a4cf2cb0fd38e141e14586a3e"
	I1119 02:44:02.087455  321785 cri.go:89] found id: ""
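	The found id: lines are simply the stdout of the crictl call above: --quiet (-q) prints bare container IDs one per line, and --label filters on the io.kubernetes.pod.namespace label CRI-O stamps on every container. Equivalent direct form:
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system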
	I1119 02:44:02.087501  321785 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:44:02.108646  321785 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:02Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:02.108711  321785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:02.120272  321785 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:44:02.120289  321785 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:44:02.120331  321785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:44:02.129883  321785 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:44:02.131275  321785 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-167150" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:02.132241  321785 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11126/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-167150" cluster setting kubeconfig missing "default-k8s-diff-port-167150" context setting]
	I1119 02:44:02.133753  321785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:02.135970  321785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:44:02.146657  321785 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1119 02:44:02.146686  321785 kubeadm.go:602] duration metric: took 26.390987ms to restartPrimaryControlPlane
	I1119 02:44:02.146696  321785 kubeadm.go:403] duration metric: took 105.316163ms to StartCluster
	I1119 02:44:02.146711  321785 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:02.146780  321785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:02.148837  321785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:02.149070  321785 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:02.149418  321785 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:02.149472  321785 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:44:02.149544  321785 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-167150"
	I1119 02:44:02.149562  321785 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-167150"
	W1119 02:44:02.149570  321785 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:44:02.149593  321785 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:44:02.149660  321785 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-167150"
	I1119 02:44:02.149672  321785 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-167150"
	W1119 02:44:02.149680  321785 addons.go:248] addon dashboard should already be in state true
	I1119 02:44:02.149698  321785 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:44:02.150160  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:02.150222  321785 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-167150"
	I1119 02:44:02.150259  321785 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167150"
	I1119 02:44:02.150645  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:02.150777  321785 out.go:179] * Verifying Kubernetes components...
	I1119 02:44:02.150805  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:02.152129  321785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:02.184485  321785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:44:02.185093  321785 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-167150"
	W1119 02:44:02.185151  321785 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:44:02.185191  321785 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:44:02.185764  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:02.185914  321785 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:02.185957  321785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:44:02.186036  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:44:02.189009  321785 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:44:02.191269  321785 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:44:02.817414  317113 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-987573" is "Ready"
	I1119 02:44:02.817451  317113 pod_ready.go:86] duration metric: took 1.201562035s for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:02.817465  317113 pod_ready.go:40] duration metric: took 15.119572051s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:02.876738  317113 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:44:02.878768  317113 out.go:203] 
	W1119 02:44:02.879972  317113 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:44:02.881087  317113 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:44:02.882563  317113 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-987573" cluster and "default" namespace by default
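	The skew warning above is advisory: the host kubectl (1.34.2) is six minor versions ahead of this 1.28.0 cluster, well outside kubectl's supported one-minor-version skew, so the version-matched wrapper the log suggests is the safer way to poke at this profile:
	  minikube -p old-k8s-version-987573 kubectl -- get pods -A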
	I1119 02:44:01.865997  322722 cli_runner.go:164] Run: docker network inspect no-preload-837474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:01.895022  322722 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:01.901043  322722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:01.915792  322722 kubeadm.go:884] updating cluster {Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:01.915927  322722 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:01.915971  322722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:01.967410  322722 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:01.967487  322722 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:01.967497  322722 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 02:44:01.967631  322722 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-837474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:44:01.967712  322722 ssh_runner.go:195] Run: crio config
	I1119 02:44:02.043481  322722 cni.go:84] Creating CNI manager for ""
	I1119 02:44:02.043503  322722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:02.043523  322722 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:44:02.043551  322722 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-837474 NodeName:no-preload-837474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:02.045134  322722 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-837474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:44:02.045275  322722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:02.060354  322722 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:02.060444  322722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:02.072326  322722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:44:02.096556  322722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:02.115337  322722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1119 02:44:02.133940  322722 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:02.138402  322722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:02.155621  322722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:02.333032  322722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:02.357454  322722 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474 for IP: 192.168.103.2
	I1119 02:44:02.357470  322722 certs.go:195] generating shared ca certs ...
	I1119 02:44:02.357491  322722 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:02.357768  322722 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:02.357850  322722 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:02.357893  322722 certs.go:257] generating profile certs ...
	I1119 02:44:02.358014  322722 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key
	I1119 02:44:02.358084  322722 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449
	I1119 02:44:02.358146  322722 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key
	I1119 02:44:02.358282  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:02.358316  322722 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:02.358325  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:02.358359  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:02.358386  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:02.358411  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:02.358485  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:02.359220  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:02.401077  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:02.432404  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:02.465534  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:02.513238  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:44:02.542343  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1119 02:44:02.568294  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:02.592094  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:44:02.616831  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:02.645202  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:02.671201  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:02.696172  322722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:02.713270  322722 ssh_runner.go:195] Run: openssl version
	I1119 02:44:02.721455  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:02.732989  322722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:02.737532  322722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:02.737594  322722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:02.787811  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:44:02.797898  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:02.808647  322722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:02.813294  322722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:02.813363  322722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:02.874056  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:02.885090  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:02.896167  322722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:02.901774  322722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:02.901827  322722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:02.956405  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:44:02.967926  322722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:02.974058  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:44:03.043012  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:44:03.127725  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:44:03.201611  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:44:03.270538  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:44:03.351153  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 02:44:03.430088  322722 kubeadm.go:401] StartCluster: {Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:03.430191  322722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:03.430241  322722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:03.494270  322722 cri.go:89] found id: "6dd757954ee069960d7775b0cb8053165f8ed7b87e78e24e092a5d8d6ad8c518"
	I1119 02:44:03.494297  322722 cri.go:89] found id: "348a7baf54addbf4a9c81030950fa886111d02619363237a83c83efe031b6e4e"
	I1119 02:44:03.494303  322722 cri.go:89] found id: "e25eec2afaa5d216ff068aae46bf36572a21229c3f7eba57128ac16e1b16a13a"
	I1119 02:44:03.494307  322722 cri.go:89] found id: "70ad4cd08b245fb372615d7c559ce529ef762f5d44fc541f9bc7000ebd69b651"
	I1119 02:44:03.494311  322722 cri.go:89] found id: ""
	I1119 02:44:03.494359  322722 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:44:03.517537  322722 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:03Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:03.517606  322722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:03.540614  322722 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:44:03.540637  322722 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:44:03.540697  322722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:44:03.555394  322722 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:44:03.556662  322722 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-837474" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:03.557598  322722 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11126/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-837474" cluster setting kubeconfig missing "no-preload-837474" context setting]
	I1119 02:44:03.559112  322722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:03.562008  322722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:44:03.579055  322722 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1119 02:44:03.579085  322722 kubeadm.go:602] duration metric: took 38.44141ms to restartPrimaryControlPlane
	I1119 02:44:03.579095  322722 kubeadm.go:403] duration metric: took 149.013327ms to StartCluster
	I1119 02:44:03.579111  322722 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:03.579177  322722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:03.581637  322722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:03.581917  322722 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:03.582141  322722 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:03.582191  322722 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:44:03.582285  322722 addons.go:70] Setting storage-provisioner=true in profile "no-preload-837474"
	I1119 02:44:03.582308  322722 addons.go:239] Setting addon storage-provisioner=true in "no-preload-837474"
	W1119 02:44:03.582316  322722 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:44:03.582343  322722 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:44:03.582867  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:03.583022  322722 addons.go:70] Setting dashboard=true in profile "no-preload-837474"
	I1119 02:44:03.583041  322722 addons.go:239] Setting addon dashboard=true in "no-preload-837474"
	W1119 02:44:03.583048  322722 addons.go:248] addon dashboard should already be in state true
	I1119 02:44:03.583071  322722 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:44:03.583518  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:03.583680  322722 addons.go:70] Setting default-storageclass=true in profile "no-preload-837474"
	I1119 02:44:03.583700  322722 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-837474"
	I1119 02:44:03.584021  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:03.587057  322722 out.go:179] * Verifying Kubernetes components...
	I1119 02:44:03.591976  322722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:03.619160  322722 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:44:03.620527  322722 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:44:03.621590  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:44:03.621682  322722 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:44:03.621818  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
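(Editor's note: the `--format` argument above is a Go text/template. It indexes the container's published-port map for `22/tcp` and extracts the `HostPort` that the SSH client should dial; the `sshutil` lines below then connect to 127.0.0.1:33123. The same template evaluated in Go against a minimal stand-in for `docker inspect` output, as a sketch:

    package main

    import (
        "encoding/json"
        "os"
        "text/template"
    )

    // A minimal stand-in for one container's `docker inspect` JSON.
    const sample = `{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33123"}]}}}`

    func main() {
        var c map[string]any
        if err := json.Unmarshal([]byte(sample), &c); err != nil {
            panic(err)
        }
        // Nested index calls: Ports["22/tcp"][0].HostPort
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        if err := tmpl.Execute(os.Stdout, c); err != nil { // prints 33123
            panic(err)
        }
    }
)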
	I1119 02:44:03.626314  322722 addons.go:239] Setting addon default-storageclass=true in "no-preload-837474"
	W1119 02:44:03.626513  322722 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:44:03.626653  322722 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:44:03.627233  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:03.628068  322722 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:59.590799  320707 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:59.596406  320707 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:43:59.597504  320707 api_server.go:141] control plane version: v1.34.1
	I1119 02:43:59.597525  320707 api_server.go:131] duration metric: took 1.007755817s to wait for apiserver health ...
	I1119 02:43:59.597534  320707 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:59.601152  320707 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:59.601193  320707 system_pods.go:61] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:59.601203  320707 system_pods.go:61] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:43:59.601217  320707 system_pods.go:61] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:43:59.601225  320707 system_pods.go:61] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:43:59.601236  320707 system_pods.go:61] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:43:59.601244  320707 system_pods.go:61] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:43:59.601251  320707 system_pods.go:61] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:43:59.601262  320707 system_pods.go:61] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:59.601270  320707 system_pods.go:74] duration metric: took 3.728972ms to wait for pod list to return data ...
	I1119 02:43:59.601283  320707 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:59.603485  320707 default_sa.go:45] found service account: "default"
	I1119 02:43:59.603506  320707 default_sa.go:55] duration metric: took 2.216158ms for default service account to be created ...
	I1119 02:43:59.603515  320707 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:59.605857  320707 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:59.605883  320707 system_pods.go:89] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:59.605890  320707 system_pods.go:89] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:43:59.605899  320707 system_pods.go:89] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:43:59.605905  320707 system_pods.go:89] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:43:59.605913  320707 system_pods.go:89] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:43:59.605919  320707 system_pods.go:89] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:43:59.605927  320707 system_pods.go:89] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:43:59.605932  320707 system_pods.go:89] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:59.605938  320707 system_pods.go:126] duration metric: took 2.416792ms to wait for k8s-apps to be running ...
	I1119 02:43:59.605946  320707 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:59.605979  320707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:59.619349  320707 system_svc.go:56] duration metric: took 13.395645ms WaitForService to wait for kubelet
	I1119 02:43:59.619375  320707 kubeadm.go:587] duration metric: took 3.040077146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:59.619410  320707 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:59.621869  320707 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:59.621891  320707 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:59.621903  320707 node_conditions.go:105] duration metric: took 2.488454ms to run NodePressure ...
	I1119 02:43:59.621915  320707 start.go:242] waiting for startup goroutines ...
	I1119 02:43:59.621927  320707 start.go:247] waiting for cluster config update ...
	I1119 02:43:59.621944  320707 start.go:256] writing updated cluster config ...
	I1119 02:43:59.622252  320707 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:59.625986  320707 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:59.629194  320707 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:44:01.634839  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:03.648649  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	I1119 02:44:02.192264  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:44:02.192303  321785 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:44:02.192351  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:44:02.228424  321785 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:02.228512  321785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:44:02.228576  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:44:02.229874  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:44:02.242125  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:44:02.258817  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:44:02.350466  321785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:02.369604  321785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:02.373678  321785 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:44:02.388816  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:44:02.388837  321785 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:44:02.411020  321785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:02.424225  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:44:02.424248  321785 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:44:02.447469  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:44:02.447552  321785 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:44:02.487961  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:44:02.487999  321785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:44:02.527470  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:44:02.527493  321785 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:44:02.551037  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:44:02.551088  321785 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:44:02.570910  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:44:02.570931  321785 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:44:02.593351  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:44:02.593378  321785 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:44:02.609670  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:44:02.609707  321785 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:44:02.628840  321785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:44:03.629220  322722 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:03.629285  322722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:44:03.629333  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:03.654971  322722 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:03.655112  322722 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:44:03.655206  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:03.665465  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:03.688637  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:03.699429  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:03.884610  322722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:03.901340  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:44:03.901363  322722 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:44:03.904000  322722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:03.930040  322722 node_ready.go:35] waiting up to 6m0s for node "no-preload-837474" to be "Ready" ...
	I1119 02:44:03.932330  322722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:03.952140  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:44:03.952215  322722 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:44:04.028462  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:44:04.028497  322722 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:44:04.085157  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:44:04.085199  322722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:44:04.125412  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:44:04.125519  322722 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:44:04.160779  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:44:04.160804  322722 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:44:04.191114  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:44:04.191138  322722 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:44:04.220706  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:44:04.220794  322722 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:44:04.258403  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:44:04.258507  322722 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:44:04.305765  322722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:44:05.129790  321785 node_ready.go:49] node "default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:05.129822  321785 node_ready.go:38] duration metric: took 2.756116341s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:44:05.129837  321785 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:44:05.129885  321785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:44:06.256339  321785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.886696988s)
	I1119 02:44:06.256572  321785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.627678683s)
	I1119 02:44:06.256753  321785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.845352931s)
	I1119 02:44:06.256787  321785 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.126891057s)
	I1119 02:44:06.257139  321785 api_server.go:72] duration metric: took 4.108035883s to wait for apiserver process to appear ...
	I1119 02:44:06.257150  321785 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:44:06.257168  321785 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1119 02:44:06.259066  321785 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-167150 addons enable metrics-server
	
	I1119 02:44:06.263303  321785 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:44:06.263364  321785 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
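(Editor's note: the 500 above is the expected transient state right after an apiserver restart: every check passes except `poststarthook/rbac/bootstrap-roles`, which stays failed until the bootstrap RBAC roles are reconciled, and minikube keeps re-polling `/healthz` until it returns 200, as it does at 02:44:06.765 below. A minimal sketch of such a poll loop in Go; `InsecureSkipVerify` stands in for minikube's real client-certificate configuration and is an assumption for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 or the deadline passes,
    // printing each non-200 body the way the log above records them.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.94.2:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
)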
	I1119 02:44:06.265950  321785 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 02:44:06.580020  322722 node_ready.go:49] node "no-preload-837474" is "Ready"
	I1119 02:44:06.580054  322722 node_ready.go:38] duration metric: took 2.649979229s for node "no-preload-837474" to be "Ready" ...
	I1119 02:44:06.580071  322722 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:44:06.580123  322722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:44:07.361047  322722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.457013506s)
	I1119 02:44:07.361139  322722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.428630966s)
	I1119 02:44:07.361483  322722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.055682339s)
	I1119 02:44:07.361653  322722 api_server.go:72] duration metric: took 3.779702096s to wait for apiserver process to appear ...
	I1119 02:44:07.361664  322722 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:44:07.361683  322722 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:44:07.363457  322722 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-837474 addons enable metrics-server
	
	I1119 02:44:07.368960  322722 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1119 02:44:05.648870  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:08.137496  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	I1119 02:44:06.267236  321785 addons.go:515] duration metric: took 4.117758775s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 02:44:06.757293  321785 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1119 02:44:06.765820  321785 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1119 02:44:06.768145  321785 api_server.go:141] control plane version: v1.34.1
	I1119 02:44:06.768228  321785 api_server.go:131] duration metric: took 511.070321ms to wait for apiserver health ...
	I1119 02:44:06.768253  321785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:44:06.772856  321785 system_pods.go:59] 8 kube-system pods found
	I1119 02:44:06.772896  321785 system_pods.go:61] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:44:06.772908  321785 system_pods.go:61] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:06.772916  321785 system_pods.go:61] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:44:06.772925  321785 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:06.772935  321785 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:06.772942  321785 system_pods.go:61] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:44:06.772950  321785 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:06.772955  321785 system_pods.go:61] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Running
	I1119 02:44:06.772963  321785 system_pods.go:74] duration metric: took 4.692396ms to wait for pod list to return data ...
	I1119 02:44:06.772972  321785 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:44:06.775941  321785 default_sa.go:45] found service account: "default"
	I1119 02:44:06.775962  321785 default_sa.go:55] duration metric: took 2.983868ms for default service account to be created ...
	I1119 02:44:06.775973  321785 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:44:06.779427  321785 system_pods.go:86] 8 kube-system pods found
	I1119 02:44:06.779492  321785 system_pods.go:89] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:44:06.779504  321785 system_pods.go:89] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:06.779511  321785 system_pods.go:89] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:44:06.779520  321785 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:06.779531  321785 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:06.779537  321785 system_pods.go:89] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:44:06.779549  321785 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:06.779555  321785 system_pods.go:89] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Running
	I1119 02:44:06.779564  321785 system_pods.go:126] duration metric: took 3.584283ms to wait for k8s-apps to be running ...
	I1119 02:44:06.779578  321785 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:44:06.779625  321785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:06.799756  321785 system_svc.go:56] duration metric: took 20.168188ms WaitForService to wait for kubelet
	I1119 02:44:06.799789  321785 kubeadm.go:587] duration metric: took 4.650687088s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:44:06.799820  321785 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:44:06.806693  321785 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:44:06.806725  321785 node_conditions.go:123] node cpu capacity is 8
	I1119 02:44:06.806741  321785 node_conditions.go:105] duration metric: took 6.915023ms to run NodePressure ...
	I1119 02:44:06.806756  321785 start.go:242] waiting for startup goroutines ...
	I1119 02:44:06.806765  321785 start.go:247] waiting for cluster config update ...
	I1119 02:44:06.806780  321785 start.go:256] writing updated cluster config ...
	I1119 02:44:06.807093  321785 ssh_runner.go:195] Run: rm -f paused
	I1119 02:44:06.812498  321785 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:06.819257  321785 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:44:08.826304  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:07.369797  322722 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:44:07.369828  322722 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:44:07.370339  322722 addons.go:515] duration metric: took 3.788144041s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 02:44:07.862007  322722 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:44:07.869064  322722 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:44:07.869276  322722 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:44:08.362770  322722 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:44:08.368557  322722 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 02:44:08.369831  322722 api_server.go:141] control plane version: v1.34.1
	I1119 02:44:08.369859  322722 api_server.go:131] duration metric: took 1.008187663s to wait for apiserver health ...
	I1119 02:44:08.369870  322722 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:44:08.375029  322722 system_pods.go:59] 8 kube-system pods found
	I1119 02:44:08.375075  322722 system_pods.go:61] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:44:08.375108  322722 system_pods.go:61] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:08.375120  322722 system_pods.go:61] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:44:08.375138  322722 system_pods.go:61] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:08.375147  322722 system_pods.go:61] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:08.375155  322722 system_pods.go:61] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:44:08.375198  322722 system_pods.go:61] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:08.375207  322722 system_pods.go:61] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:44:08.375213  322722 system_pods.go:74] duration metric: took 5.337805ms to wait for pod list to return data ...
	I1119 02:44:08.375226  322722 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:44:08.378133  322722 default_sa.go:45] found service account: "default"
	I1119 02:44:08.378153  322722 default_sa.go:55] duration metric: took 2.920658ms for default service account to be created ...
	I1119 02:44:08.378163  322722 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:44:08.381245  322722 system_pods.go:86] 8 kube-system pods found
	I1119 02:44:08.381275  322722 system_pods.go:89] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:44:08.381285  322722 system_pods.go:89] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:08.381295  322722 system_pods.go:89] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:44:08.381306  322722 system_pods.go:89] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:08.381318  322722 system_pods.go:89] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:08.381327  322722 system_pods.go:89] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:44:08.381337  322722 system_pods.go:89] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:08.381349  322722 system_pods.go:89] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:44:08.381356  322722 system_pods.go:126] duration metric: took 3.187892ms to wait for k8s-apps to be running ...
	I1119 02:44:08.381365  322722 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:44:08.381412  322722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:08.399828  322722 system_svc.go:56] duration metric: took 18.454262ms WaitForService to wait for kubelet
	I1119 02:44:08.399865  322722 kubeadm.go:587] duration metric: took 4.817907193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:44:08.399888  322722 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:44:08.403901  322722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:44:08.403929  322722 node_conditions.go:123] node cpu capacity is 8
	I1119 02:44:08.403944  322722 node_conditions.go:105] duration metric: took 4.04954ms to run NodePressure ...
	I1119 02:44:08.403957  322722 start.go:242] waiting for startup goroutines ...
	I1119 02:44:08.403967  322722 start.go:247] waiting for cluster config update ...
	I1119 02:44:08.403980  322722 start.go:256] writing updated cluster config ...
	I1119 02:44:08.404292  322722 ssh_runner.go:195] Run: rm -f paused
	I1119 02:44:08.409302  322722 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:08.414887  322722 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:44:10.421293  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:10.637123  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:13.136638  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:11.324360  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:13.326307  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:12.421604  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:14.422915  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:15.656120  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:18.136265  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:15.824839  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:17.825074  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:16.459334  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:18.920572  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
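(Editor's note: the `pod_ready` warnings above come from minikube's extra-wait phase: each labeled kube-system pod is polled until its Ready condition turns true or the 4m0s budget runs out; here the CoreDNS pods never flip to Ready within the window. An equivalent readiness probe with client-go, as a sketch; the pod name is from the log and the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady mirrors the check behind pod_ready.go: the pod counts as
    // ready only when its PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-66bc5c9577-44bdr", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            if podReady(pod) {
                fmt.Println("ready")
                return
            }
            fmt.Println("not ready yet")
            time.Sleep(2 * time.Second)
        }
    }
)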
	
	
	==> CRI-O <==
	Nov 19 02:44:09 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:09.942850312Z" level=info msg="Created container 09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper" id=99d1bf7f-e890-44c4-9b77-96015115f15d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:09 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:09.943471073Z" level=info msg="Starting container: 09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0" id=c3633666-7efd-423e-a7b5-0602063e76c5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:09 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:09.945678998Z" level=info msg="Started container" PID=1731 containerID=09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper id=c3633666-7efd-423e-a7b5-0602063e76c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4712eb504baa8295a5393acd51be4ae787d7ec34ae40a05f03ea986ccd250eaa
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.52760866Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ddfb020d-e0cc-4bb7-88ff-11e59f724789 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.531283886Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=18f5ba74-9ab8-4d51-bf40-099a1e7fa6d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.535543116Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper" id=60bc11de-9b24-4f2f-959b-f50653be2bbe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.535847848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.545738061Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.54641983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.582107034Z" level=info msg="Created container c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper" id=60bc11de-9b24-4f2f-959b-f50653be2bbe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.582802727Z" level=info msg="Starting container: c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e" id=bb7a49f2-66fd-41f8-97e3-8ad60d92a8c4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.58518058Z" level=info msg="Started container" PID=1742 containerID=c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper id=bb7a49f2-66fd-41f8-97e3-8ad60d92a8c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4712eb504baa8295a5393acd51be4ae787d7ec34ae40a05f03ea986ccd250eaa
	Nov 19 02:44:11 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:11.533671243Z" level=info msg="Removing container: 09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0" id=50fe67d6-ef1d-4157-9866-2f8fe8689a8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:11 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:11.544535919Z" level=info msg="Removed container 09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper" id=50fe67d6-ef1d-4157-9866-2f8fe8689a8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.550684926Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=91cd4a17-1c4d-4db1-8279-699e1b19e3f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.551667041Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=27f39c23-e440-4c51-9daf-0eae1046f23b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.552842299Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3f94c313-40c7-46c1-8f46-24f0c24eba55 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.552974512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.559892609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.560074759Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d299ab7a83f816fb830e8cca9867be86ab2e3ad0cf4e7b8a8124bb0648488a7f/merged/etc/passwd: no such file or directory"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.560110239Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d299ab7a83f816fb830e8cca9867be86ab2e3ad0cf4e7b8a8124bb0648488a7f/merged/etc/group: no such file or directory"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.560405273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.588414705Z" level=info msg="Created container 2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee: kube-system/storage-provisioner/storage-provisioner" id=3f94c313-40c7-46c1-8f46-24f0c24eba55 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.589009111Z" level=info msg="Starting container: 2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee" id=632988fd-6a65-44e5-affc-3312f0f4b4f8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.591202411Z" level=info msg="Started container" PID=1757 containerID=2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee description=kube-system/storage-provisioner/storage-provisioner id=632988fd-6a65-44e5-affc-3312f0f4b4f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48171d9453f2b4a1efe0b015f63dac5b61dd498fdddb79e04903aae4733578dc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2077593bc532c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           6 seconds ago       Running             storage-provisioner         1                   48171d9453f2b       storage-provisioner                              kube-system
	c8e88b5f77554       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   1                   4712eb504baa8       dashboard-metrics-scraper-5f989dc9cf-4jn6m       kubernetes-dashboard
	a0016258b0d08       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   af7f54ad0c3da       kubernetes-dashboard-8694d4445c-mshqj            kubernetes-dashboard
	0be6a37a59224       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           33 seconds ago      Running             coredns                     0                   29266351cd326       coredns-5dd5756b68-djd8r                         kube-system
	f499f6a025ea2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           33 seconds ago      Running             busybox                     1                   58fccff701069       busybox                                          default
	9838e80b4a113       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           37 seconds ago      Exited              storage-provisioner         0                   48171d9453f2b       storage-provisioner                              kube-system
	de11ec22f706c       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           37 seconds ago      Running             kube-proxy                  0                   eac27cfb28d5c       kube-proxy-tmqhk                                 kube-system
	5ade0b93851f2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           37 seconds ago      Running             kindnet-cni                 0                   a872c47f30de9       kindnet-57t4v                                    kube-system
	698573dc69a8a       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           40 seconds ago      Running             kube-controller-manager     0                   6c384a9b3e8a5       kube-controller-manager-old-k8s-version-987573   kube-system
	0b21b4a61c9e3       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           40 seconds ago      Running             kube-scheduler              0                   2398c1d185249       kube-scheduler-old-k8s-version-987573            kube-system
	52e10aa72ed87       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           40 seconds ago      Running             kube-apiserver              0                   6751465260e59       kube-apiserver-old-k8s-version-987573            kube-system
	a3a95c851a1b1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           40 seconds ago      Running             etcd                        0                   0e512b64a736f       etcd-old-k8s-version-987573                      kube-system
	
	
	==> coredns [0be6a37a592243280ea5c142186391f3f2f26b568b8a07102398749bd16bb41a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56034 - 51200 "HINFO IN 6212132219970849905.7332513038716120323. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.457041818s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-987573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-987573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=old-k8s-version-987573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_42_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:42:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-987573
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:17 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:17 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:17 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:44:17 +0000   Wed, 19 Nov 2025 02:43:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-987573
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                7b61050c-e4d6-47f6-aa9c-d45cf03b4e83
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 coredns-5dd5756b68-djd8r                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 etcd-old-k8s-version-987573                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         103s
	  kube-system                 kindnet-57t4v                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-old-k8s-version-987573             250m (3%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-old-k8s-version-987573    200m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-tmqhk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-old-k8s-version-987573             100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4jn6m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mshqj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 88s                kube-proxy       
	  Normal  Starting                 37s                kube-proxy       
	  Normal  NodeHasSufficientMemory  103s               kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s               kubelet          Node old-k8s-version-987573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s               kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           90s                node-controller  Node old-k8s-version-987573 event: Registered Node old-k8s-version-987573 in Controller
	  Normal  NodeReady                76s                kubelet          Node old-k8s-version-987573 status is now: NodeReady
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node old-k8s-version-987573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node old-k8s-version-987573 event: Registered Node old-k8s-version-987573 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [a3a95c851a1b1a7b23770436d155ba0f868406c9e5408bb1d6b801e15b851212] <==
	{"level":"info","ts":"2025-11-19T02:43:43.985447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T02:43:43.985545Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T02:43:43.985657Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:43:43.985687Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:43:43.988883Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T02:43:43.989091Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T02:43:43.989102Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T02:43:43.989143Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T02:43:43.989152Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T02:43:45.277648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T02:43:45.277689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T02:43:45.277711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T02:43:45.277725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T02:43:45.277731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T02:43:45.277739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-19T02:43:45.277745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T02:43:45.279139Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-987573 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T02:43:45.279141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:43:45.279158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:43:45.279855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T02:43:45.279956Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T02:43:45.281182Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-19T02:43:45.281359Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T02:44:07.676479Z","caller":"traceutil/trace.go:171","msg":"trace[484018559] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"129.241126ms","start":"2025-11-19T02:44:07.547179Z","end":"2025-11-19T02:44:07.676421Z","steps":["trace[484018559] 'process raft request'  (duration: 128.852937ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:44:07.676944Z","caller":"traceutil/trace.go:171","msg":"trace[159111979] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"129.540798ms","start":"2025-11-19T02:44:07.547253Z","end":"2025-11-19T02:44:07.676794Z","steps":["trace[159111979] 'process raft request'  (duration: 129.004708ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:44:24 up  1:26,  0 user,  load average: 4.55, 3.51, 2.32
	Linux old-k8s-version-987573 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ade0b93851f29eec6e4a88852e33c753160da8ea44034a1ae4d3403b4213d7b] <==
	I1119 02:43:46.948216       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:43:46.948419       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:43:47.041721       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:43:47.041752       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:43:47.041774       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:43:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:43:47.240833       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:43:47.240878       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:43:47.240889       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:43:47.241004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:43:47.641132       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:43:47.641157       1 metrics.go:72] Registering metrics
	I1119 02:43:47.641198       1 controller.go:711] "Syncing nftables rules"
	I1119 02:43:57.240458       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:43:57.240498       1 main.go:301] handling current node
	I1119 02:44:07.242546       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:44:07.243161       1 main.go:301] handling current node
	I1119 02:44:17.240570       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:44:17.240616       1 main.go:301] handling current node
	
	
	==> kube-apiserver [52e10aa72ed87bdafda6e448ab0fe9236452ea9f877e2c66f9761af96e094140] <==
	I1119 02:43:46.196743       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:43:46.244143       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:43:46.253526       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 02:43:46.253614       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 02:43:46.253755       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 02:43:46.253850       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 02:43:46.253531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 02:43:46.254608       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 02:43:46.254642       1 aggregator.go:166] initial CRD sync complete...
	I1119 02:43:46.254650       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 02:43:46.254656       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:43:46.254663       1 cache.go:39] Caches are synced for autoregister controller
	E1119 02:43:46.259298       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 02:43:46.273499       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 02:43:47.048889       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 02:43:47.078934       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 02:43:47.095382       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:43:47.102719       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:43:47.109403       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 02:43:47.144372       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.16.66"}
	I1119 02:43:47.146688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:43:47.159406       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.44.239"}
	I1119 02:43:59.292757       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 02:43:59.441786       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:43:59.543647       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [698573dc69a8a06012cd23a1989bd77a62894912ddd2392fb3c8adab817e74a2] <==
	I1119 02:43:59.375581       1 shared_informer.go:318] Caches are synced for stateful set
	I1119 02:43:59.411929       1 shared_informer.go:318] Caches are synced for persistent volume
	I1119 02:43:59.416268       1 shared_informer.go:318] Caches are synced for ephemeral
	I1119 02:43:59.438570       1 shared_informer.go:318] Caches are synced for attach detach
	I1119 02:43:59.438590       1 shared_informer.go:318] Caches are synced for expand
	I1119 02:43:59.443955       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 02:43:59.548201       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1119 02:43:59.548340       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1119 02:43:59.746937       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mshqj"
	I1119 02:43:59.749136       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4jn6m"
	I1119 02:43:59.755834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="208.07463ms"
	I1119 02:43:59.757919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="210.141586ms"
	I1119 02:43:59.762361       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:43:59.768547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.174262ms"
	I1119 02:43:59.768631       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.237µs"
	I1119 02:43:59.770582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.205953ms"
	I1119 02:43:59.770679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.945µs"
	I1119 02:43:59.776410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.708µs"
	I1119 02:43:59.787790       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:43:59.787813       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 02:44:07.680387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="141.155183ms"
	I1119 02:44:07.680648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="115.957µs"
	I1119 02:44:10.542570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.841µs"
	I1119 02:44:11.545558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.528µs"
	I1119 02:44:12.550825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="128.918µs"
	
	
	==> kube-proxy [de11ec22f706c83cf86b25166cee8deb1a767a0d7161e431ab4ff464ea56370e] <==
	I1119 02:43:46.814188       1 server_others.go:69] "Using iptables proxy"
	I1119 02:43:46.823184       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 02:43:46.843635       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:43:46.846784       1 server_others.go:152] "Using iptables Proxier"
	I1119 02:43:46.846829       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 02:43:46.846837       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 02:43:46.846880       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 02:43:46.847130       1 server.go:846] "Version info" version="v1.28.0"
	I1119 02:43:46.847143       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:46.849616       1 config.go:315] "Starting node config controller"
	I1119 02:43:46.849645       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 02:43:46.849961       1 config.go:188] "Starting service config controller"
	I1119 02:43:46.849984       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 02:43:46.849963       1 config.go:97] "Starting endpoint slice config controller"
	I1119 02:43:46.850025       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 02:43:46.949931       1 shared_informer.go:318] Caches are synced for node config
	I1119 02:43:46.951075       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 02:43:46.951088       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [0b21b4a61c9e39b222029f13c6ca3c909e31c027914e269966be2589940c1b05] <==
	I1119 02:43:44.488625       1 serving.go:348] Generated self-signed cert in-memory
	W1119 02:43:46.176539       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:43:46.176579       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:43:46.176621       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:43:46.176645       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:43:46.202478       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1119 02:43:46.202503       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:46.203777       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:43:46.203816       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 02:43:46.204682       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1119 02:43:46.204713       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1119 02:43:46.304057       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 02:43:48 old-k8s-version-987573 kubelet[732]: E1119 02:43:48.092584     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38b8c793-304e-42c1-b2a0-ecd1032a5962-config-volume podName:38b8c793-304e-42c1-b2a0-ecd1032a5962 nodeName:}" failed. No retries permitted until 2025-11-19 02:43:50.092567656 +0000 UTC m=+6.746470075 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/38b8c793-304e-42c1-b2a0-ecd1032a5962-config-volume") pod "coredns-5dd5756b68-djd8r" (UID: "38b8c793-304e-42c1-b2a0-ecd1032a5962") : object "kube-system"/"coredns" not registered
	Nov 19 02:43:48 old-k8s-version-987573 kubelet[732]: E1119 02:43:48.193722     732 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 19 02:43:48 old-k8s-version-987573 kubelet[732]: E1119 02:43:48.193750     732 projected.go:198] Error preparing data for projected volume kube-api-access-rj25l for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 19 02:43:48 old-k8s-version-987573 kubelet[732]: E1119 02:43:48.193802     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c204876-422a-41f9-9047-80e08d35da45-kube-api-access-rj25l podName:9c204876-422a-41f9-9047-80e08d35da45 nodeName:}" failed. No retries permitted until 2025-11-19 02:43:50.193788067 +0000 UTC m=+6.847690487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rj25l" (UniqueName: "kubernetes.io/projected/9c204876-422a-41f9-9047-80e08d35da45-kube-api-access-rj25l") pod "busybox" (UID: "9c204876-422a-41f9-9047-80e08d35da45") : object "default"/"kube-root-ca.crt" not registered
	Nov 19 02:43:55 old-k8s-version-987573 kubelet[732]: I1119 02:43:55.311083     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.753137     732 topology_manager.go:215] "Topology Admit Handler" podUID="5857a23b-a4e9-46c7-8df9-28cdb04e7452" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-mshqj"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.758633     732 topology_manager.go:215] "Topology Admit Handler" podUID="52a583d3-3a23-4e43-b437-210666e9d26a" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-4jn6m"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.859634     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5857a23b-a4e9-46c7-8df9-28cdb04e7452-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mshqj\" (UID: \"5857a23b-a4e9-46c7-8df9-28cdb04e7452\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshqj"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.859681     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblx9\" (UniqueName: \"kubernetes.io/projected/52a583d3-3a23-4e43-b437-210666e9d26a-kube-api-access-qblx9\") pod \"dashboard-metrics-scraper-5f989dc9cf-4jn6m\" (UID: \"52a583d3-3a23-4e43-b437-210666e9d26a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.859711     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/52a583d3-3a23-4e43-b437-210666e9d26a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4jn6m\" (UID: \"52a583d3-3a23-4e43-b437-210666e9d26a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.859903     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbm6\" (UniqueName: \"kubernetes.io/projected/5857a23b-a4e9-46c7-8df9-28cdb04e7452-kube-api-access-zlbm6\") pod \"kubernetes-dashboard-8694d4445c-mshqj\" (UID: \"5857a23b-a4e9-46c7-8df9-28cdb04e7452\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshqj"
	Nov 19 02:44:10 old-k8s-version-987573 kubelet[732]: I1119 02:44:10.526604     732 scope.go:117] "RemoveContainer" containerID="09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0"
	Nov 19 02:44:10 old-k8s-version-987573 kubelet[732]: I1119 02:44:10.542122     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshqj" podStartSLOduration=4.873608043 podCreationTimestamp="2025-11-19 02:43:59 +0000 UTC" firstStartedPulling="2025-11-19 02:44:00.078181732 +0000 UTC m=+16.732084166" lastFinishedPulling="2025-11-19 02:44:06.74663661 +0000 UTC m=+23.400539038" observedRunningTime="2025-11-19 02:44:07.541761794 +0000 UTC m=+24.195664234" watchObservedRunningTime="2025-11-19 02:44:10.542062915 +0000 UTC m=+27.195965356"
	Nov 19 02:44:11 old-k8s-version-987573 kubelet[732]: I1119 02:44:11.531652     732 scope.go:117] "RemoveContainer" containerID="09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0"
	Nov 19 02:44:11 old-k8s-version-987573 kubelet[732]: I1119 02:44:11.531806     732 scope.go:117] "RemoveContainer" containerID="c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	Nov 19 02:44:11 old-k8s-version-987573 kubelet[732]: E1119 02:44:11.532201     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4jn6m_kubernetes-dashboard(52a583d3-3a23-4e43-b437-210666e9d26a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m" podUID="52a583d3-3a23-4e43-b437-210666e9d26a"
	Nov 19 02:44:12 old-k8s-version-987573 kubelet[732]: I1119 02:44:12.536980     732 scope.go:117] "RemoveContainer" containerID="c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	Nov 19 02:44:12 old-k8s-version-987573 kubelet[732]: E1119 02:44:12.537348     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4jn6m_kubernetes-dashboard(52a583d3-3a23-4e43-b437-210666e9d26a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m" podUID="52a583d3-3a23-4e43-b437-210666e9d26a"
	Nov 19 02:44:17 old-k8s-version-987573 kubelet[732]: I1119 02:44:17.550149     732 scope.go:117] "RemoveContainer" containerID="9838e80b4a11306fc1d1ad1687e1efac6e8267ea4a072326986fd003834c2d07"
	Nov 19 02:44:20 old-k8s-version-987573 kubelet[732]: I1119 02:44:20.061397     732 scope.go:117] "RemoveContainer" containerID="c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	Nov 19 02:44:20 old-k8s-version-987573 kubelet[732]: E1119 02:44:20.061731     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4jn6m_kubernetes-dashboard(52a583d3-3a23-4e43-b437-210666e9d26a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m" podUID="52a583d3-3a23-4e43-b437-210666e9d26a"
	Nov 19 02:44:21 old-k8s-version-987573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:44:21 old-k8s-version-987573 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:44:21 old-k8s-version-987573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:44:21 old-k8s-version-987573 systemd[1]: kubelet.service: Consumed 1.214s CPU time.
	
	
	==> kubernetes-dashboard [a0016258b0d08479349678ea97b542cd6bed29e5be0daa43e282fc63d368df4b] <==
	2025/11/19 02:44:06 Using namespace: kubernetes-dashboard
	2025/11/19 02:44:06 Using in-cluster config to connect to apiserver
	2025/11/19 02:44:06 Using secret token for csrf signing
	2025/11/19 02:44:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:44:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:44:06 Successful initial request to the apiserver, version: v1.28.0
	2025/11/19 02:44:06 Generating JWE encryption key
	2025/11/19 02:44:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:44:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:44:07 Initializing JWE encryption key from synchronized object
	2025/11/19 02:44:07 Creating in-cluster Sidecar client
	2025/11/19 02:44:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:07 Serving insecurely on HTTP port: 9090
	2025/11/19 02:44:06 Starting overwatch
	
	
	==> storage-provisioner [2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee] <==
	I1119 02:44:17.605538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:44:17.615971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:44:17.616040       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [9838e80b4a11306fc1d1ad1687e1efac6e8267ea4a072326986fd003834c2d07] <==
	I1119 02:43:46.786538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:44:16.789009       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-987573 -n old-k8s-version-987573
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-987573 -n old-k8s-version-987573: exit status 2 (315.243232ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-987573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-987573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-987573:

-- stdout --
	[
	    {
	        "Id": "ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71",
	        "Created": "2025-11-19T02:42:22.008498904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317433,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:43:37.301287263Z",
	            "FinishedAt": "2025-11-19T02:43:36.3737728Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/hosts",
	        "LogPath": "/var/lib/docker/containers/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71/ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71-json.log",
	        "Name": "/old-k8s-version-987573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-987573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-987573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae750ceb959b3b185a87925748a821dda02d033dac6848c57030347d6edbda71",
	                "LowerDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67a2eee882f8978eae59dbc9ac2c8a6169ceac2cd04882ad01ae0421935fe202/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-987573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-987573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-987573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-987573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-987573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "17e03c5e389717bc05f7a2427c9b292a19f2d4f7bf1480acc544b4d4c621f4c1",
	            "SandboxKey": "/var/run/docker/netns/17e03c5e3897",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-987573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d7fb52c0aef23ee545f7da5c971e8a8676f2221b08ae8d87614b0f88b577986",
	                    "EndpointID": "4f8b7b3e81a725a6a839f84ec17adb0209131308bf240214d3974abb607b26b9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "a2:a0:b3:db:af:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-987573",
	                        "ae750ceb959b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
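The JSON above is the tail of a docker container inspect dump for the old-k8s-version node container, showing its published ports, mounts, and network endpoint. A minimal sketch to reproduce it, or to pull out a single published port, assuming the same profile container name:

	docker container inspect old-k8s-version-987573
	# just the host port published for the API server (8443/tcp):
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-987573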
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-987573 -n old-k8s-version-987573
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-987573 -n old-k8s-version-987573: exit status 2 (307.651439ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
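The harness treats exit status 2 from minikube status as possibly benign because the host line still reads Running; only the exit code distinguishes a fully running cluster from one with paused or stopped components. A minimal sketch of branching on that code, assuming the same binary and profile as this run (the exact code-to-state mapping is minikube-internal; the harness only relies on 2 being non-fatal):

	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-987573 -n old-k8s-version-987573
	case $? in
	  0) echo "all components running" ;;
	  2) echo "host up, some component stopped or paused (expected right after pause)" ;;
	  *) echo "status failed" ;;
	esac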
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-987573 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-987573 logs -n 25: (1.06492866s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-001617 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ ssh     │ -p bridge-001617 sudo crio config                                                                                                                                                                                                             │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p bridge-001617                                                                                                                                                                                                                              │ bridge-001617                │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ delete  │ -p disable-driver-mounts-682232                                                                                                                                                                                                               │ disable-driver-mounts-682232 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p old-k8s-version-987573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-167150 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:43:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:43:55.428473  322722 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:43:55.428738  322722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:43:55.428748  322722 out.go:374] Setting ErrFile to fd 2...
	I1119 02:43:55.428752  322722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:43:55.428986  322722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:43:55.429462  322722 out.go:368] Setting JSON to false
	I1119 02:43:55.430538  322722 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5182,"bootTime":1763515053,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:43:55.430632  322722 start.go:143] virtualization: kvm guest
	I1119 02:43:55.432528  322722 out.go:179] * [no-preload-837474] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:43:55.433950  322722 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:43:55.433980  322722 notify.go:221] Checking for updates...
	I1119 02:43:55.436001  322722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:43:55.437466  322722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:55.438636  322722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:43:55.439931  322722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:43:55.441572  322722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:43:54.285873  320707 ssh_runner.go:195] Run: cat /version.json
	I1119 02:43:54.285916  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:54.285934  320707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:43:54.285997  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:54.304991  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:54.305271  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
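The two inspect calls above use a Go template to read the host port mapped to the container's 22/tcp endpoint, the same structure shown in the Ports JSON earlier. A sketch of the lookup on its own, using the profile from this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-811173
	# prints 33113 in this run, matching the ssh clients dialed above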
	I1119 02:43:54.399265  320707 ssh_runner.go:195] Run: systemctl --version
	I1119 02:43:54.465660  320707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:43:54.506992  320707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:43:54.511582  320707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:43:54.511646  320707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:43:54.520206  320707 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:43:54.520229  320707 start.go:496] detecting cgroup driver to use...
	I1119 02:43:54.520257  320707 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:43:54.520315  320707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:43:54.534914  320707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:43:54.547340  320707 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:43:54.547391  320707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:43:54.561592  320707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:43:54.575624  320707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:43:54.653367  320707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:43:54.757461  320707 docker.go:234] disabling docker service ...
	I1119 02:43:54.757544  320707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:43:54.772667  320707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:43:54.784628  320707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:43:54.877357  320707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:43:54.971404  320707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:43:54.989534  320707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:43:55.003067  320707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:43:55.003125  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.011984  320707 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:43:55.012050  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.020366  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.028592  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.037402  320707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:43:55.046391  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.064425  320707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:43:55.076092  320707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
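The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to systemd, resets conmon_cgroup, and seeds default_sysctls so unprivileged ports start at 0. A sketch of verifying the keys it should have produced:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected matches, assembled from the sed edits above:
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",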
	I1119 02:43:55.091206  320707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:43:55.100105  320707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:43:55.111263  320707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:55.221869  320707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:43:55.350118  320707 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:43:55.350185  320707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:43:55.354534  320707 start.go:564] Will wait 60s for crictl version
	I1119 02:43:55.354594  320707 ssh_runner.go:195] Run: which crictl
	I1119 02:43:55.358043  320707 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:43:55.382945  320707 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
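The version probe runs the crictl binary found at /usr/local/bin; it reads the runtime endpoint from the /etc/crictl.yaml written earlier. The equivalent manual invocation with the endpoint spelled out:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version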
	I1119 02:43:55.383026  320707 ssh_runner.go:195] Run: crio --version
	I1119 02:43:55.415001  320707 ssh_runner.go:195] Run: crio --version
	I1119 02:43:55.448080  320707 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:43:55.443088  322722 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:55.443554  322722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:43:55.468196  322722 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:43:55.468281  322722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:43:55.527928  322722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-19 02:43:55.516761816 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:43:55.528029  322722 docker.go:319] overlay module found
	I1119 02:43:55.529631  322722 out.go:179] * Using the docker driver based on existing profile
	I1119 02:43:55.530733  322722 start.go:309] selected driver: docker
	I1119 02:43:55.530744  322722 start.go:930] validating driver "docker" against &{Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:43:55.530824  322722 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:43:55.531356  322722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:43:55.591349  322722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-19 02:43:55.581212886 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:43:55.591729  322722 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:55.591766  322722 cni.go:84] Creating CNI manager for ""
	I1119 02:43:55.591822  322722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:55.591869  322722 start.go:353] cluster config:
	{Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:43:55.597458  322722 out.go:179] * Starting "no-preload-837474" primary control-plane node in "no-preload-837474" cluster
	I1119 02:43:55.600595  322722 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:43:55.601763  322722 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:43:55.602926  322722 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:43:55.603020  322722 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:43:55.603052  322722 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/config.json ...
	I1119 02:43:55.603284  322722 cache.go:107] acquiring lock: {Name:mk0b4a5ed1b254b5d61172b3c33fc894da77be9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603313  322722 cache.go:107] acquiring lock: {Name:mkddee0277675ded6b2e43d9db23318e5b303890 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603418  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 02:43:55.603394  322722 cache.go:107] acquiring lock: {Name:mk52ee23ebdd5f1abc2a7e417a2896e8538de4dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603450  322722 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 164.772µs
	I1119 02:43:55.603468  322722 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 02:43:55.603465  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 02:43:55.603460  322722 cache.go:107] acquiring lock: {Name:mk8b71fab168cd41fe90be16c8f6c892544feb60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603485  322722 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 191.551µs
	I1119 02:43:55.603496  322722 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 02:43:55.603455  322722 cache.go:107] acquiring lock: {Name:mk58f7adfb29feece603cf6d9222a90ab24abc38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603305  322722 cache.go:107] acquiring lock: {Name:mk1acaa7e17abb35c0a1b36f8014c55ac138b78f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603530  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 02:43:55.603527  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 02:43:55.603546  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 02:43:55.603540  322722 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 192.592µs
	I1119 02:43:55.603554  322722 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 260.582µs
	I1119 02:43:55.603542  322722 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 85.38µs
	I1119 02:43:55.603559  322722 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 02:43:55.603563  322722 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 02:43:55.603565  322722 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 02:43:55.603289  322722 cache.go:107] acquiring lock: {Name:mkcbdda5a2c225a14d113eb60bf1b63a9f7af468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603571  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 02:43:55.603562  322722 cache.go:107] acquiring lock: {Name:mk789e792a18684b36279333d1d2a3790dd7ce3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.603625  322722 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 225.502µs
	I1119 02:43:55.603641  322722 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 02:43:55.603641  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 02:43:55.603650  322722 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 376.33µs
	I1119 02:43:55.603662  322722 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 02:43:55.603716  322722 cache.go:115] /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1119 02:43:55.603731  322722 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 224.573µs
	I1119 02:43:55.603752  322722 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 02:43:55.603767  322722 cache.go:87] Successfully saved all images to host disk.
	I1119 02:43:55.623230  322722 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:43:55.623252  322722 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:43:55.623272  322722 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:43:55.623300  322722 start.go:360] acquireMachinesLock for no-preload-837474: {Name:mk39987c4e02a0b7f1a15807d776065c6d095ec8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:43:55.623357  322722 start.go:364] duration metric: took 38.318µs to acquireMachinesLock for "no-preload-837474"
	I1119 02:43:55.623378  322722 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:43:55.623388  322722 fix.go:54] fixHost starting: 
	I1119 02:43:55.623807  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:55.640610  322722 fix.go:112] recreateIfNeeded on no-preload-837474: state=Stopped err=<nil>
	W1119 02:43:55.640633  322722 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:43:55.449157  320707 cli_runner.go:164] Run: docker network inspect embed-certs-811173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:43:55.469701  320707 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:43:55.474249  320707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
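The bash one-liner above rewrites /etc/hosts atomically: it strips any stale host.minikube.internal entry, appends the docker network gateway mapping, and copies the temp file back into place. Afterwards the file should carry exactly one such line:

	grep 'host.minikube.internal' /etc/hosts
	# expected after the rewrite:
	# 192.168.85.1	host.minikube.internal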
	I1119 02:43:55.485867  320707 kubeadm.go:884] updating cluster {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:43:55.486009  320707 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:43:55.486065  320707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:43:55.525836  320707 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:43:55.525855  320707 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:43:55.525897  320707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:43:55.553608  320707 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:43:55.553634  320707 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:43:55.553644  320707 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 02:43:55.553765  320707 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-811173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
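The drop-in above replaces the kubelet ExecStart with one wired to CRI-O and this node's IP; it is installed as a systemd override (the 368-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down). A sketch of confirming the effective unit once systemd has reloaded:

	systemctl cat kubelet                   # full unit plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet     # the ExecStart actually in effect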
	I1119 02:43:55.553843  320707 ssh_runner.go:195] Run: crio config
	I1119 02:43:55.602252  320707 cni.go:84] Creating CNI manager for ""
	I1119 02:43:55.602274  320707 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:43:55.602288  320707 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:43:55.602309  320707 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-811173 NodeName:embed-certs-811173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:43:55.602445  320707 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-811173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:43:55.602516  320707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:43:55.611132  320707 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:43:55.611188  320707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:43:55.619384  320707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:43:55.632653  320707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:43:55.645483  320707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
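That scp lands the assembled kubeadm config at /var/tmp/minikube/kubeadm.yaml.new (the 2214 bytes match the YAML printed above). On kubeadm releases that ship the validate subcommand, the file can be sanity-checked without touching the cluster:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new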
	I1119 02:43:55.658251  320707 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:43:55.662538  320707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:43:55.672971  320707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:55.755627  320707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:55.787084  320707 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173 for IP: 192.168.85.2
	I1119 02:43:55.787108  320707 certs.go:195] generating shared ca certs ...
	I1119 02:43:55.787129  320707 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:55.787288  320707 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:43:55.787339  320707 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:43:55.787349  320707 certs.go:257] generating profile certs ...
	I1119 02:43:55.787497  320707 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/client.key
	I1119 02:43:55.787571  320707 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key.a0a915e4
	I1119 02:43:55.787627  320707 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key
	I1119 02:43:55.787764  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:43:55.787816  320707 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:43:55.787831  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:43:55.787865  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:43:55.787898  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:43:55.787928  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:43:55.787995  320707 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:43:55.788908  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:43:55.814136  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:43:55.841341  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:43:55.862047  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:43:55.888515  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:43:55.906292  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:43:55.923009  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:43:55.940508  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/embed-certs-811173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:43:55.957798  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:43:55.977304  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:43:55.998424  320707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:43:56.018715  320707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:43:56.031184  320707 ssh_runner.go:195] Run: openssl version
	I1119 02:43:56.037204  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:43:56.045741  320707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:43:56.049423  320707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:43:56.049479  320707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:43:56.085397  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:43:56.093019  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:43:56.104553  320707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:43:56.110028  320707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:43:56.110076  320707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:43:56.144840  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:43:56.152756  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:43:56.160727  320707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:43:56.164354  320707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:43:56.164395  320707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:43:56.208726  320707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
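The three hash-and-link rounds above are how OpenSSL's CA lookup works: certificates in /etc/ssl/certs are found via a symlink named after the subject hash (here b5213941.0, 51391683.0, 3ec20f2e.0), which `openssl x509 -hash` computes. A minimal sketch of one round, using the paths from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0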
	I1119 02:43:56.216934  320707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:43:56.221328  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:43:56.266359  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:43:56.306134  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:43:56.356664  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:43:56.411481  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:43:56.469333  320707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
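Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would push minikube down the regeneration path. The same check for one cert, stand-alone:

	openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for >= 24h" || echo "expires within 24h"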
	I1119 02:43:56.506095  320707 kubeadm.go:401] StartCluster: {Name:embed-certs-811173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-811173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:43:56.506191  320707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:43:56.506243  320707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:43:56.538459  320707 cri.go:89] found id: "e0994ea94767873e5f7aa16af71ef5155fc15391a563da35948cadb1520f80bd"
	I1119 02:43:56.538482  320707 cri.go:89] found id: "b9603bf135a48a7fd7f1a7df00bc5ac2ca325854631a2e9109eebbe9c579c3fc"
	I1119 02:43:56.538487  320707 cri.go:89] found id: "05974f8fe2ed9b3af8b149d271de0fd120542bca0e181f00cc290f0684748003"
	I1119 02:43:56.538490  320707 cri.go:89] found id: "706b2dbda2d38ebc2ca3e61f6b17e96a3d75c375c204a2bcebbf88ede678a129"
	I1119 02:43:56.538494  320707 cri.go:89] found id: ""
	I1119 02:43:56.538542  320707 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:43:56.550639  320707 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:43:56Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:43:56.550691  320707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:43:56.558911  320707 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:43:56.558927  320707 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:43:56.558968  320707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:43:56.566586  320707 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:43:56.567107  320707 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-811173" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:56.567372  320707 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11126/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-811173" cluster setting kubeconfig missing "embed-certs-811173" context setting]
	I1119 02:43:56.567809  320707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:56.569379  320707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:43:56.577463  320707 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 02:43:56.577490  320707 kubeadm.go:602] duration metric: took 18.556909ms to restartPrimaryControlPlane
	I1119 02:43:56.577499  320707 kubeadm.go:403] duration metric: took 71.414046ms to StartCluster
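restartPrimaryControlPlane short-circuits here: the freshly rendered kubeadm config is diffed against the copy already on the node, and an empty diff ("does not require reconfiguration") means the existing control plane is reused as-is. The equivalent check by hand:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no control-plane reconfiguration needed"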
	I1119 02:43:56.577514  320707 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:56.577579  320707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:43:56.579038  320707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:43:56.579269  320707 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:43:56.579330  320707 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:43:56.579439  320707 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-811173"
	I1119 02:43:56.579458  320707 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-811173"
	W1119 02:43:56.579466  320707 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:43:56.579465  320707 addons.go:70] Setting dashboard=true in profile "embed-certs-811173"
	I1119 02:43:56.579488  320707 addons.go:239] Setting addon dashboard=true in "embed-certs-811173"
	I1119 02:43:56.579494  320707 host.go:66] Checking if "embed-certs-811173" exists ...
	W1119 02:43:56.579499  320707 addons.go:248] addon dashboard should already be in state true
	I1119 02:43:56.579516  320707 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:56.579533  320707 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:56.579563  320707 addons.go:70] Setting default-storageclass=true in profile "embed-certs-811173"
	I1119 02:43:56.579580  320707 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-811173"
	I1119 02:43:56.579846  320707 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:56.579978  320707 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:56.580011  320707 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:56.581204  320707 out.go:179] * Verifying Kubernetes components...
	I1119 02:43:56.582446  320707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:43:56.605762  320707 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:43:56.605810  320707 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:43:56.606576  320707 addons.go:239] Setting addon default-storageclass=true in "embed-certs-811173"
	W1119 02:43:56.606598  320707 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:43:56.606623  320707 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:43:56.607082  320707 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:43:56.607213  320707 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:56.607232  320707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:43:56.607294  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:56.610564  320707 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1119 02:43:52.207759  317113 pod_ready.go:104] pod "coredns-5dd5756b68-djd8r" is not "Ready", error: node "old-k8s-version-987573" hosting pod "coredns-5dd5756b68-djd8r" is not "Ready" (will retry)
	W1119 02:43:54.208568  317113 pod_ready.go:104] pod "coredns-5dd5756b68-djd8r" is not "Ready", error: node "old-k8s-version-987573" hosting pod "coredns-5dd5756b68-djd8r" is not "Ready" (will retry)
	I1119 02:43:56.708080  317113 pod_ready.go:94] pod "coredns-5dd5756b68-djd8r" is "Ready"
	I1119 02:43:56.708112  317113 pod_ready.go:86] duration metric: took 9.006187672s for pod "coredns-5dd5756b68-djd8r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:56.711315  317113 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:43:56.611723  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:43:56.611741  320707 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:43:56.611792  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:56.640805  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:56.641296  320707 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:56.641311  320707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:43:56.641364  320707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:43:56.646841  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:43:56.666259  320707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
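The repeated `docker container inspect -f` calls above resolve which host port Docker mapped to the container's 22/tcp; that port (33113 here) is what the SSH clients dial. Stand-alone, the same lookup is:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  embed-certs-811173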
	I1119 02:43:56.726501  320707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:43:56.739974  320707 node_ready.go:35] waiting up to 6m0s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:56.750619  320707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:43:56.759982  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:43:56.760032  320707 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:43:56.775475  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:43:56.775497  320707 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:43:56.777383  320707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:43:56.793034  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:43:56.793054  320707 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:43:56.810675  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:43:56.810695  320707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:43:56.827797  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:43:56.827818  320707 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:43:56.841837  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:43:56.841863  320707 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:43:56.854192  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:43:56.854211  320707 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:43:56.866106  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:43:56.866123  320707 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:43:56.877864  320707 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:43:56.877884  320707 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:43:56.889684  320707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
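The single kubectl invocation above applies all ten dashboard manifests in one shot; repeated -f flags are applied in the order given. Once it completes, the result can be inspected with the same pinned kubectl (namespace assumed from the dashboard manifests, not shown in the log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl \
	  -n kubernetes-dashboard get deploy,svc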
	I1119 02:43:58.034685  320707 node_ready.go:49] node "embed-certs-811173" is "Ready"
	I1119 02:43:58.034733  320707 node_ready.go:38] duration metric: took 1.294722255s for node "embed-certs-811173" to be "Ready" ...
	I1119 02:43:58.034751  320707 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:43:58.034822  320707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:43:58.589508  320707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.83885118s)
	I1119 02:43:58.589595  320707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812177747s)
	I1119 02:43:58.589720  320707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.700002287s)
	I1119 02:43:58.589745  320707 api_server.go:72] duration metric: took 2.010447222s to wait for apiserver process to appear ...
	I1119 02:43:58.589761  320707 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:43:58.589781  320707 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:58.591316  320707 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-811173 addons enable metrics-server
	
	I1119 02:43:58.597926  320707 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:43:58.597955  320707 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:43:58.604106  320707 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 02:43:58.605045  320707 addons.go:515] duration metric: took 2.02572569s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 02:43:59.090526  320707 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:59.095309  320707 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:43:59.095340  320707 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
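The two 500s above are the normal restart sequence: /healthz aggregates every sub-check, and it stays 500 until the last poststarthooks (rbac/bootstrap-roles is the final holdout here) report ok, at which point the wait loop sees 200 and moves on. The same probe by hand, with the per-check breakdown (-k because plain curl does not trust minikubeCA):

	curl -k https://192.168.85.2:8443/healthz
	curl -k 'https://192.168.85.2:8443/healthz?verbose'   # per-check output as above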
	I1119 02:43:54.544732  321785 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-167150" ...
	I1119 02:43:54.544793  321785 cli_runner.go:164] Run: docker start default-k8s-diff-port-167150
	I1119 02:43:54.848898  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:43:54.869258  321785 kic.go:430] container "default-k8s-diff-port-167150" state is running.
	I1119 02:43:54.869657  321785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:43:54.889310  321785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/config.json ...
	I1119 02:43:54.889538  321785 machine.go:94] provisionDockerMachine start ...
	I1119 02:43:54.889606  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:54.914941  321785 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:54.915282  321785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1119 02:43:54.915302  321785 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:43:54.916085  321785 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45204->127.0.0.1:33118: read: connection reset by peer
	I1119 02:43:58.068557  321785 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:43:58.068611  321785 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-167150"
	I1119 02:43:58.068771  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:58.093829  321785 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:58.094063  321785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1119 02:43:58.094073  321785 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-167150 && echo "default-k8s-diff-port-167150" | sudo tee /etc/hostname
	I1119 02:43:58.245203  321785 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-167150
	
	I1119 02:43:58.245283  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:58.267476  321785 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:58.267925  321785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1119 02:43:58.267962  321785 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-167150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-167150/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-167150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:43:58.407057  321785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
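The guarded /etc/hosts edit above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1: it rewrites an existing 127.0.1.1 entry if one is present and appends a new one otherwise. Verifying on the node:

	grep '^127\.0\.1\.1' /etc/hosts   # expect: 127.0.1.1 default-k8s-diff-port-167150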
	I1119 02:43:58.407086  321785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:43:58.407110  321785 ubuntu.go:190] setting up certificates
	I1119 02:43:58.407122  321785 provision.go:84] configureAuth start
	I1119 02:43:58.407196  321785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:43:58.428099  321785 provision.go:143] copyHostCerts
	I1119 02:43:58.428188  321785 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:43:58.428206  321785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:43:58.428287  321785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:43:58.428413  321785 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:43:58.428424  321785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:43:58.428490  321785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:43:58.428621  321785 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:43:58.428634  321785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:43:58.428686  321785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:43:58.428792  321785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-167150 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-167150 localhost minikube]
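configureAuth refreshes the host-side copies of the CA material and then mints a server cert whose SANs cover every name the machine may be dialed by (loopback, the container IP 192.168.94.2, the profile name, localhost, minikube). The SAN list can be read back out of the generated cert:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'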
	I1119 02:43:58.832957  321785 provision.go:177] copyRemoteCerts
	I1119 02:43:58.833014  321785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:43:58.833055  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:58.850615  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:58.951401  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:43:58.971768  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:43:58.989607  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 02:43:59.005852  321785 provision.go:87] duration metric: took 598.718896ms to configureAuth
	I1119 02:43:59.005897  321785 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:43:59.006096  321785 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:59.006211  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.026841  321785 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:59.027142  321785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1119 02:43:59.027169  321785 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:43:55.643058  322722 out.go:252] * Restarting existing docker container for "no-preload-837474" ...
	I1119 02:43:55.643109  322722 cli_runner.go:164] Run: docker start no-preload-837474
	I1119 02:43:55.963827  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:43:55.984623  322722 kic.go:430] container "no-preload-837474" state is running.
	I1119 02:43:55.985077  322722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-837474
	I1119 02:43:56.004259  322722 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/config.json ...
	I1119 02:43:56.004477  322722 machine.go:94] provisionDockerMachine start ...
	I1119 02:43:56.004558  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:56.023538  322722 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:56.023843  322722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1119 02:43:56.023870  322722 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:43:56.024501  322722 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35268->127.0.0.1:33123: read: connection reset by peer
	I1119 02:43:59.168317  322722 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-837474
	
	I1119 02:43:59.168363  322722 ubuntu.go:182] provisioning hostname "no-preload-837474"
	I1119 02:43:59.168426  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.193581  322722 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:59.193913  322722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1119 02:43:59.193929  322722 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-837474 && echo "no-preload-837474" | sudo tee /etc/hostname
	I1119 02:43:59.358400  322722 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-837474
	
	I1119 02:43:59.358511  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.379495  322722 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:59.379699  322722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1119 02:43:59.379723  322722 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-837474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-837474/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-837474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:43:59.514602  322722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:43:59.514631  322722 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:43:59.514653  322722 ubuntu.go:190] setting up certificates
	I1119 02:43:59.514666  322722 provision.go:84] configureAuth start
	I1119 02:43:59.514742  322722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-837474
	I1119 02:43:59.533401  322722 provision.go:143] copyHostCerts
	I1119 02:43:59.533471  322722 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:43:59.533484  322722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:43:59.533561  322722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:43:59.533699  322722 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:43:59.533710  322722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:43:59.533751  322722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:43:59.533851  322722 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:43:59.533862  322722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:43:59.533895  322722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:43:59.533979  322722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.no-preload-837474 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-837474]
	I1119 02:43:59.731844  322722 provision.go:177] copyRemoteCerts
	I1119 02:43:59.731935  322722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:43:59.731990  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.756868  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:43:59.859188  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:43:59.876559  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:43:59.894074  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:43:59.910722  322722 provision.go:87] duration metric: took 396.042204ms to configureAuth
	I1119 02:43:59.910750  322722 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:43:59.910921  322722 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:43:59.911020  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.934165  322722 main.go:143] libmachine: Using SSH client type: native
	I1119 02:43:59.934501  322722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1119 02:43:59.934528  322722 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:44:00.278168  322722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:44:00.278195  322722 machine.go:97] duration metric: took 4.273701801s to provisionDockerMachine
	I1119 02:44:00.278208  322722 start.go:293] postStartSetup for "no-preload-837474" (driver="docker")
	I1119 02:44:00.278221  322722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:44:00.278294  322722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:44:00.278343  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:00.305893  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:00.400223  322722 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:44:00.403578  322722 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:44:00.403608  322722 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:44:00.403620  322722 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:44:00.403673  322722 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:44:00.403766  322722 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:44:00.403884  322722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:44:00.411198  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:00.428413  322722 start.go:296] duration metric: took 150.192775ms for postStartSetup
	I1119 02:44:00.428491  322722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:44:00.428524  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:43:59.399187  321785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:43:59.399216  321785 machine.go:97] duration metric: took 4.509661678s to provisionDockerMachine
	I1119 02:43:59.399231  321785 start.go:293] postStartSetup for "default-k8s-diff-port-167150" (driver="docker")
	I1119 02:43:59.399245  321785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:43:59.399309  321785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:43:59.399358  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.418696  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:59.517864  321785 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:43:59.521401  321785 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:43:59.521462  321785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:43:59.521476  321785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:43:59.521529  321785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:43:59.521628  321785 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:43:59.521736  321785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:43:59.529278  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:43:59.548188  321785 start.go:296] duration metric: took 148.944112ms for postStartSetup
	I1119 02:43:59.548256  321785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:43:59.548301  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.568911  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:59.661301  321785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:43:59.665639  321785 fix.go:56] duration metric: took 5.140061101s for fixHost
	I1119 02:43:59.665665  321785 start.go:83] releasing machines lock for "default-k8s-diff-port-167150", held for 5.140105804s
	I1119 02:43:59.665730  321785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-167150
	I1119 02:43:59.688059  321785 ssh_runner.go:195] Run: cat /version.json
	I1119 02:43:59.688100  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.688149  321785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:43:59.688215  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:43:59.708829  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:59.709773  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:43:59.806678  321785 ssh_runner.go:195] Run: systemctl --version
	I1119 02:43:59.860389  321785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:43:59.894335  321785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:43:59.898990  321785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:43:59.899045  321785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:43:59.906344  321785 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:43:59.906364  321785 start.go:496] detecting cgroup driver to use...
	I1119 02:43:59.906390  321785 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:43:59.906453  321785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:43:59.923664  321785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:43:59.938620  321785 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:43:59.938684  321785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:43:59.954745  321785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:43:59.969039  321785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:00.056088  321785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:00.155244  321785 docker.go:234] disabling docker service ...
	I1119 02:44:00.155300  321785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:00.173143  321785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:00.187714  321785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:00.270749  321785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:00.353802  321785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
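The shutdown dance for cri-docker and docker above is deliberate: stop the socket before the service so socket activation cannot restart it, then disable the socket, then mask the service unit so nothing pulls it back in while CRI-O owns the node. Condensed:

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service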
	I1119 02:44:00.365520  321785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:00.379891  321785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:00.379949  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.388308  321785 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:00.388357  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.396576  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.405318  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.413548  321785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:00.421252  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.431408  321785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.439519  321785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:00.448740  321785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:00.455638  321785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:00.462608  321785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:00.544149  321785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:44:00.683768  321785 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:00.683840  321785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:00.688133  321785 start.go:564] Will wait 60s for crictl version
	I1119 02:44:00.688189  321785 ssh_runner.go:195] Run: which crictl
	I1119 02:44:00.692065  321785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:00.716044  321785 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
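After restarting CRI-O, the start code polls first for the socket and then for a successful crictl handshake, each with a 60s budget. A minimal bash version of that wait, using the same paths and timeout as the log:

	# Poll a command until it succeeds or the budget runs out.
	wait_for() {  # wait_for <seconds> <command...>
	  local deadline=$((SECONDS + $1)); shift
	  until "$@" >/dev/null 2>&1; do
	    (( SECONDS < deadline )) || { echo "timed out waiting for: $*" >&2; return 1; }
	    sleep 1
	  done
	}
	wait_for 60 stat /var/run/crio/crio.sock
	wait_for 60 sudo /usr/local/bin/crictl version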
	I1119 02:44:00.716120  321785 ssh_runner.go:195] Run: crio --version
	I1119 02:44:00.745396  321785 ssh_runner.go:195] Run: crio --version
	I1119 02:44:00.775063  321785 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:44:00.447787  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:00.543664  322722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:44:00.548285  322722 fix.go:56] duration metric: took 4.92489252s for fixHost
	I1119 02:44:00.548311  322722 start.go:83] releasing machines lock for "no-preload-837474", held for 4.924940102s
	I1119 02:44:00.548377  322722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-837474
	I1119 02:44:00.567418  322722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:44:00.567508  322722 ssh_runner.go:195] Run: cat /version.json
	I1119 02:44:00.567548  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:00.567563  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:00.587014  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:00.588024  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:00.681657  322722 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:00.742242  322722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:44:00.778533  322722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:44:00.783844  322722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:44:00.783902  322722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:44:00.792717  322722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:44:00.792741  322722 start.go:496] detecting cgroup driver to use...
	I1119 02:44:00.792774  322722 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:44:00.792822  322722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:44:00.808286  322722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:44:00.820699  322722 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:44:00.820754  322722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:44:00.838152  322722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:44:00.853523  322722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:00.961698  322722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:01.076515  322722 docker.go:234] disabling docker service ...
	I1119 02:44:01.076582  322722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:01.095571  322722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:01.108321  322722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:01.208624  322722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:01.330008  322722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:44:01.345790  322722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:01.361656  322722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:01.361714  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.370790  322722 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:01.370846  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.380584  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.390006  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.399802  322722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:01.408671  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.419067  322722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.428291  322722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:01.437373  322722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:01.444629  322722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:01.452824  322722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:01.552680  322722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:44:01.734287  322722 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:01.734362  322722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:01.740011  322722 start.go:564] Will wait 60s for crictl version
	I1119 02:44:01.740146  322722 ssh_runner.go:195] Run: which crictl
	I1119 02:44:01.744551  322722 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:01.773153  322722 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:44:01.773234  322722 ssh_runner.go:195] Run: crio --version
	I1119 02:44:01.817082  322722 ssh_runner.go:195] Run: crio --version
	I1119 02:44:01.864750  322722 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1119 02:43:58.715760  317113 pod_ready.go:104] pod "etcd-old-k8s-version-987573" is not "Ready", error: <nil>
	W1119 02:44:00.717005  317113 pod_ready.go:104] pod "etcd-old-k8s-version-987573" is not "Ready", error: <nil>
	I1119 02:44:01.218344  317113 pod_ready.go:94] pod "etcd-old-k8s-version-987573" is "Ready"
	I1119 02:44:01.218388  317113 pod_ready.go:86] duration metric: took 4.507040325s for pod "etcd-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.223345  317113 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.229114  317113 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-987573" is "Ready"
	I1119 02:44:01.229141  317113 pod_ready.go:86] duration metric: took 5.771752ms for pod "kube-apiserver-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.232611  317113 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.237839  317113 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-987573" is "Ready"
	I1119 02:44:01.237862  317113 pod_ready.go:86] duration metric: took 5.22401ms for pod "kube-controller-manager-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.241010  317113 pod_ready.go:83] waiting for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.415680  317113 pod_ready.go:94] pod "kube-proxy-tmqhk" is "Ready"
	I1119 02:44:01.415706  317113 pod_ready.go:86] duration metric: took 174.671754ms for pod "kube-proxy-tmqhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:01.615862  317113 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:00.776476  321785 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-167150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:00.796310  321785 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:00.800290  321785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
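The grep/printf pipeline above is minikube's idempotent /etc/hosts edit: drop any stale line for the name, append the fresh mapping to a temp file, then sudo cp the temp file over the original (cp rather than mv, so the ownership and attributes of /etc/hosts survive). As a reusable sketch with the same IP and hostname as the log (the name is matched as a regex, which is close enough for dotted hostnames in practice):

	pin_host() {  # pin_host <ip> <name>
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	}
	pin_host 192.168.94.1 host.minikube.internal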
	I1119 02:44:00.810096  321785 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:00.810253  321785 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:00.810315  321785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:00.849031  321785 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:00.849054  321785 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:44:00.849120  321785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:00.882209  321785 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:00.882231  321785 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:00.882239  321785 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1119 02:44:00.882332  321785 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-167150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:44:00.882412  321785 ssh_runner.go:195] Run: crio config
	I1119 02:44:00.949078  321785 cni.go:84] Creating CNI manager for ""
	I1119 02:44:00.949109  321785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:00.949131  321785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:44:00.949161  321785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-167150 NodeName:default-k8s-diff-port-167150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:00.949340  321785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-167150"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:44:00.949417  321785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:00.958676  321785 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:00.958740  321785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:00.967278  321785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 02:44:00.985547  321785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:01.006153  321785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
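The kubeadm.yaml.new just written is the multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file). One way to check such a file before handing it to the node, assuming a kubeadm recent enough to ship the validator subcommand:

	# Validate all documents in the generated config without touching the cluster.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new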
	I1119 02:44:01.022191  321785 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:01.026556  321785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:01.040900  321785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:01.145157  321785 ssh_runner.go:195] Run: sudo systemctl start kubelet
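The last few steps install the kubelet unit plus its 10-kubeadm.conf drop-in, reload systemd, and start kubelet. Condensed into a standalone sketch (ExecStart flags abbreviated here; the full set appears in the unit dump above):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet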
	I1119 02:44:01.178937  321785 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150 for IP: 192.168.94.2
	I1119 02:44:01.178956  321785 certs.go:195] generating shared ca certs ...
	I1119 02:44:01.178986  321785 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:01.179197  321785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:01.179258  321785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:01.179272  321785 certs.go:257] generating profile certs ...
	I1119 02:44:01.179376  321785 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/client.key
	I1119 02:44:01.179478  321785 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key.c3ecd8f4
	I1119 02:44:01.179534  321785 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key
	I1119 02:44:01.179689  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:01.179732  321785 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:01.179747  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:01.179786  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:01.179837  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:01.179873  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:01.179933  321785 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:01.180613  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:01.200998  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:01.224874  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:01.250349  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:01.276007  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 02:44:01.313169  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:44:01.333954  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:01.357045  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/default-k8s-diff-port-167150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:44:01.374673  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:01.393980  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:01.414242  321785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:01.434091  321785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:01.447113  321785 ssh_runner.go:195] Run: openssl version
	I1119 02:44:01.453537  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:01.461547  321785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:01.464946  321785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:01.464992  321785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:01.514904  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:01.524612  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:01.535794  321785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:01.540374  321785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:01.540455  321785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:01.595144  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:44:01.606044  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:01.615659  321785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:01.620077  321785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:01.620136  321785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:01.663985  321785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
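Each hash/symlink pair above registers a CA the way OpenSSL expects to find it: a <subject-hash>.0 symlink in /etc/ssl/certs (b5213941.0 is minikubeCA's hash in this run). The generic recipe, using the minikubeCA paths from the log:

	# Expose the cert under /etc/ssl/certs, then add the subject-hash link.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 here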
	I1119 02:44:01.672722  321785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:01.676954  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:44:01.721257  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:44:01.771902  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:44:01.831420  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:44:01.919067  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:44:01.979121  321785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
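Each of those openssl calls uses -checkend 86400, which exits non-zero if the certificate would expire within the next 24 hours; that is the signal for minikube to regenerate. The same sweep as one loop:

	# Flag any control-plane cert that expires within 24h.
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    || echo "renew soon: ${c}" >&2
	done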
	I1119 02:44:02.041386  321785 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-167150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-167150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:02.041507  321785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:02.041577  321785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:02.087330  321785 cri.go:89] found id: "0850d32773d1729f97e0f3baf42d1b3638a7327abc66f584efafbdaa4334a283"
	I1119 02:44:02.087357  321785 cri.go:89] found id: "299bbab984622e99c9bf240099fd1891299f48da807c2b0ab1553ad4885d7c13"
	I1119 02:44:02.087427  321785 cri.go:89] found id: "7cdb91f63703193832fa8fc84ec766b4d87e2ac3e24887dcbcb074dfdac9634d"
	I1119 02:44:02.087450  321785 cri.go:89] found id: "f308d3728814cf13897a458da3b827483ae71b6a4cf2cb0fd38e141e14586a3e"
	I1119 02:44:02.087455  321785 cri.go:89] found id: ""
	I1119 02:44:02.087501  321785 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:44:02.108646  321785 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:02Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:02.108711  321785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:02.120272  321785 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:44:02.120289  321785 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:44:02.120331  321785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:44:02.129883  321785 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:44:02.131275  321785 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-167150" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:02.132241  321785 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11126/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-167150" cluster setting kubeconfig missing "default-k8s-diff-port-167150" context setting]
	I1119 02:44:02.133753  321785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:02.135970  321785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:44:02.146657  321785 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1119 02:44:02.146686  321785 kubeadm.go:602] duration metric: took 26.390987ms to restartPrimaryControlPlane
	I1119 02:44:02.146696  321785 kubeadm.go:403] duration metric: took 105.316163ms to StartCluster
	I1119 02:44:02.146711  321785 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:02.146780  321785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:02.148837  321785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:02.149070  321785 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:02.149418  321785 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:02.149472  321785 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:44:02.149544  321785 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-167150"
	I1119 02:44:02.149562  321785 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-167150"
	W1119 02:44:02.149570  321785 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:44:02.149593  321785 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:44:02.149660  321785 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-167150"
	I1119 02:44:02.149672  321785 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-167150"
	W1119 02:44:02.149680  321785 addons.go:248] addon dashboard should already be in state true
	I1119 02:44:02.149698  321785 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:44:02.150160  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:02.150222  321785 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-167150"
	I1119 02:44:02.150259  321785 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167150"
	I1119 02:44:02.150645  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:02.150777  321785 out.go:179] * Verifying Kubernetes components...
	I1119 02:44:02.150805  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:02.152129  321785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:02.184485  321785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:44:02.185093  321785 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-167150"
	W1119 02:44:02.185151  321785 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:44:02.185191  321785 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:44:02.185764  321785 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:02.185914  321785 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:02.185957  321785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:44:02.186036  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:44:02.189009  321785 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:44:02.191269  321785 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
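At this point the storage-provisioner manifest has been staged under /etc/kubernetes/addons and the dashboard images are selected; the addon manager then applies the staged manifests with the cluster's own kubectl. Conceptually it runs something like the following (the kubeconfig path matches the one copied to /var/lib/minikube/kubeconfig earlier in this log; the exact invocation inside minikube may differ):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml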
	I1119 02:44:02.817414  317113 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-987573" is "Ready"
	I1119 02:44:02.817451  317113 pod_ready.go:86] duration metric: took 1.201562035s for pod "kube-scheduler-old-k8s-version-987573" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:02.817465  317113 pod_ready.go:40] duration metric: took 15.119572051s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:02.876738  317113 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:44:02.878768  317113 out.go:203] 
	W1119 02:44:02.879972  317113 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:44:02.881087  317113 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:44:02.882563  317113 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-987573" cluster and "default" namespace by default
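The warning above fires because kubectl only guarantees compatibility within one minor version of the API server, and this host pairs kubectl 1.34.2 with a 1.28.0 cluster (a skew of 6). A quick way to surface both sides of the skew for any cluster (jq assumed):

	kubectl version -o json | jq -r \
	  '"client v1.\(.clientVersion.minor)  server v1.\(.serverVersion.minor)"'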
	I1119 02:44:01.865997  322722 cli_runner.go:164] Run: docker network inspect no-preload-837474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:01.895022  322722 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:01.901043  322722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:01.915792  322722 kubeadm.go:884] updating cluster {Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:01.915927  322722 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:01.915971  322722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:01.967410  322722 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:01.967487  322722 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:01.967497  322722 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 02:44:01.967631  322722 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-837474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:44:01.967712  322722 ssh_runner.go:195] Run: crio config
	I1119 02:44:02.043481  322722 cni.go:84] Creating CNI manager for ""
	I1119 02:44:02.043503  322722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:02.043523  322722 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:44:02.043551  322722 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-837474 NodeName:no-preload-837474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:02.045134  322722 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-837474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:44:02.045275  322722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:02.060354  322722 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:02.060444  322722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:02.072326  322722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:44:02.096556  322722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:02.115337  322722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1119 02:44:02.133940  322722 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:02.138402  322722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:02.155621  322722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:02.333032  322722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:02.357454  322722 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474 for IP: 192.168.103.2
	I1119 02:44:02.357470  322722 certs.go:195] generating shared ca certs ...
	I1119 02:44:02.357491  322722 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:02.357768  322722 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:02.357850  322722 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:02.357893  322722 certs.go:257] generating profile certs ...
	I1119 02:44:02.358014  322722 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/client.key
	I1119 02:44:02.358084  322722 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key.2f093449
	I1119 02:44:02.358146  322722 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key
	I1119 02:44:02.358282  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:02.358316  322722 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:02.358325  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:02.358359  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:02.358386  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:02.358411  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:02.358485  322722 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:02.359220  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:02.401077  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:02.432404  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:02.465534  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:02.513238  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:44:02.542343  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1119 02:44:02.568294  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:02.592094  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/no-preload-837474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:44:02.616831  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:02.645202  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:02.671201  322722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:02.696172  322722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:02.713270  322722 ssh_runner.go:195] Run: openssl version
	I1119 02:44:02.721455  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:02.732989  322722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:02.737532  322722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:02.737594  322722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:02.787811  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:44:02.797898  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:02.808647  322722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:02.813294  322722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:02.813363  322722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:02.874056  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:02.885090  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:02.896167  322722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:02.901774  322722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:02.901827  322722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:02.956405  322722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:44:02.967926  322722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:02.974058  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:44:03.043012  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:44:03.127725  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:44:03.201611  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:44:03.270538  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:44:03.351153  322722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 02:44:03.430088  322722 kubeadm.go:401] StartCluster: {Name:no-preload-837474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-837474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:03.430191  322722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:03.430241  322722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:03.494270  322722 cri.go:89] found id: "6dd757954ee069960d7775b0cb8053165f8ed7b87e78e24e092a5d8d6ad8c518"
	I1119 02:44:03.494297  322722 cri.go:89] found id: "348a7baf54addbf4a9c81030950fa886111d02619363237a83c83efe031b6e4e"
	I1119 02:44:03.494303  322722 cri.go:89] found id: "e25eec2afaa5d216ff068aae46bf36572a21229c3f7eba57128ac16e1b16a13a"
	I1119 02:44:03.494307  322722 cri.go:89] found id: "70ad4cd08b245fb372615d7c559ce529ef762f5d44fc541f9bc7000ebd69b651"
	I1119 02:44:03.494311  322722 cri.go:89] found id: ""
	I1119 02:44:03.494359  322722 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:44:03.517537  322722 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:03Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:03.517606  322722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:03.540614  322722 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:44:03.540637  322722 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:44:03.540697  322722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:44:03.555394  322722 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:44:03.556662  322722 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-837474" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:03.557598  322722 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11126/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-837474" cluster setting kubeconfig missing "no-preload-837474" context setting]
	I1119 02:44:03.559112  322722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:03.562008  322722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:44:03.579055  322722 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1119 02:44:03.579085  322722 kubeadm.go:602] duration metric: took 38.44141ms to restartPrimaryControlPlane
	I1119 02:44:03.579095  322722 kubeadm.go:403] duration metric: took 149.013327ms to StartCluster
	I1119 02:44:03.579111  322722 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:03.579177  322722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:03.581637  322722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:03.581917  322722 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:03.582141  322722 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:03.582191  322722 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:44:03.582285  322722 addons.go:70] Setting storage-provisioner=true in profile "no-preload-837474"
	I1119 02:44:03.582308  322722 addons.go:239] Setting addon storage-provisioner=true in "no-preload-837474"
	W1119 02:44:03.582316  322722 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:44:03.582343  322722 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:44:03.582867  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:03.583022  322722 addons.go:70] Setting dashboard=true in profile "no-preload-837474"
	I1119 02:44:03.583041  322722 addons.go:239] Setting addon dashboard=true in "no-preload-837474"
	W1119 02:44:03.583048  322722 addons.go:248] addon dashboard should already be in state true
	I1119 02:44:03.583071  322722 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:44:03.583518  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:03.583680  322722 addons.go:70] Setting default-storageclass=true in profile "no-preload-837474"
	I1119 02:44:03.583700  322722 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-837474"
	I1119 02:44:03.584021  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:03.587057  322722 out.go:179] * Verifying Kubernetes components...
	I1119 02:44:03.591976  322722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:03.619160  322722 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:44:03.620527  322722 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:44:03.621590  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:44:03.621682  322722 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:44:03.621818  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:03.626314  322722 addons.go:239] Setting addon default-storageclass=true in "no-preload-837474"
	W1119 02:44:03.626513  322722 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:44:03.626653  322722 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:44:03.627233  322722 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:03.628068  322722 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
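
	The `docker container inspect -f` template in the lines above resolves which host port Docker mapped to the container's 22/tcp; the subsequent sshutil lines then dial that port on 127.0.0.1 (33123 in this run). A sketch of the same lookup, with the container name taken from this profile purely for illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template as the log above: index the container's port map at
		// "22/tcp" and take the first binding's HostPort.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"no-preload-837474").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh port on 127.0.0.1:", strings.TrimSpace(string(out)))
	}
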
	I1119 02:43:59.590799  320707 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:43:59.596406  320707 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:43:59.597504  320707 api_server.go:141] control plane version: v1.34.1
	I1119 02:43:59.597525  320707 api_server.go:131] duration metric: took 1.007755817s to wait for apiserver health ...
	I1119 02:43:59.597534  320707 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:43:59.601152  320707 system_pods.go:59] 8 kube-system pods found
	I1119 02:43:59.601193  320707 system_pods.go:61] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:59.601203  320707 system_pods.go:61] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:43:59.601217  320707 system_pods.go:61] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:43:59.601225  320707 system_pods.go:61] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:43:59.601236  320707 system_pods.go:61] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:43:59.601244  320707 system_pods.go:61] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:43:59.601251  320707 system_pods.go:61] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:43:59.601262  320707 system_pods.go:61] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:59.601270  320707 system_pods.go:74] duration metric: took 3.728972ms to wait for pod list to return data ...
	I1119 02:43:59.601283  320707 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:43:59.603485  320707 default_sa.go:45] found service account: "default"
	I1119 02:43:59.603506  320707 default_sa.go:55] duration metric: took 2.216158ms for default service account to be created ...
	I1119 02:43:59.603515  320707 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:43:59.605857  320707 system_pods.go:86] 8 kube-system pods found
	I1119 02:43:59.605883  320707 system_pods.go:89] "coredns-66bc5c9577-6zqr2" [45763e00-8d07-4cd1-bc77-8131988ad187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:43:59.605890  320707 system_pods.go:89] "etcd-embed-certs-811173" [aa91bc11-b985-43ed-bb19-226f47adb517] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:43:59.605899  320707 system_pods.go:89] "kindnet-b2w9g" [0c0429a0-c37c-4eae-befb-d496610e882c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:43:59.605905  320707 system_pods.go:89] "kube-apiserver-embed-certs-811173" [85c9fc14-94db-4732-ad9c-53fdb27b0bb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:43:59.605913  320707 system_pods.go:89] "kube-controller-manager-embed-certs-811173" [9944e561-0ab9-496d-baac-8b99bf3d6149] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:43:59.605919  320707 system_pods.go:89] "kube-proxy-s5bzz" [cebbac1b-ff7a-4bdf-b337-ec0b3b320728] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:43:59.605927  320707 system_pods.go:89] "kube-scheduler-embed-certs-811173" [6c1f1974-3341-47e3-875d-e5ec0abd032c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:43:59.605932  320707 system_pods.go:89] "storage-provisioner" [4b41d056-28d4-4b4a-b546-2fb8c76fe688] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:43:59.605938  320707 system_pods.go:126] duration metric: took 2.416792ms to wait for k8s-apps to be running ...
	I1119 02:43:59.605946  320707 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:43:59.605979  320707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:43:59.619349  320707 system_svc.go:56] duration metric: took 13.395645ms WaitForService to wait for kubelet
	I1119 02:43:59.619375  320707 kubeadm.go:587] duration metric: took 3.040077146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:43:59.619410  320707 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:43:59.621869  320707 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:43:59.621891  320707 node_conditions.go:123] node cpu capacity is 8
	I1119 02:43:59.621903  320707 node_conditions.go:105] duration metric: took 2.488454ms to run NodePressure ...
	I1119 02:43:59.621915  320707 start.go:242] waiting for startup goroutines ...
	I1119 02:43:59.621927  320707 start.go:247] waiting for cluster config update ...
	I1119 02:43:59.621944  320707 start.go:256] writing updated cluster config ...
	I1119 02:43:59.622252  320707 ssh_runner.go:195] Run: rm -f paused
	I1119 02:43:59.625986  320707 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:43:59.629194  320707 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:44:01.634839  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:03.648649  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
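
	The recurring pod_ready.go:104 warnings come from polling each matched kube-system pod until its Ready condition turns True or the pod is gone. A minimal sketch of that condition test, assuming the k8s.io/api core/v1 types (not minikube's exact code):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's Ready condition is True; the wait
	// loop in the log re-checks this until it holds or the pod is deleted.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{} // a pod with no conditions yet is not Ready
		fmt.Println(isPodReady(pod))
	}
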
	I1119 02:44:02.192264  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:44:02.192303  321785 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:44:02.192351  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:44:02.228424  321785 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:02.228512  321785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:44:02.228576  321785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:44:02.229874  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:44:02.242125  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:44:02.258817  321785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:44:02.350466  321785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:02.369604  321785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:02.373678  321785 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:44:02.388816  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:44:02.388837  321785 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:44:02.411020  321785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:02.424225  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:44:02.424248  321785 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:44:02.447469  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:44:02.447552  321785 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:44:02.487961  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:44:02.487999  321785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:44:02.527470  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:44:02.527493  321785 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:44:02.551037  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:44:02.551088  321785 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:44:02.570910  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:44:02.570931  321785 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:44:02.593351  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:44:02.593378  321785 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:44:02.609670  321785 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:44:02.609707  321785 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:44:02.628840  321785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:44:03.629220  322722 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:03.629285  322722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:44:03.629333  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:03.654971  322722 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:03.655112  322722 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:44:03.655206  322722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:03.665465  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:03.688637  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:03.699429  322722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:03.884610  322722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:03.901340  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:44:03.901363  322722 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:44:03.904000  322722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:03.930040  322722 node_ready.go:35] waiting up to 6m0s for node "no-preload-837474" to be "Ready" ...
	I1119 02:44:03.932330  322722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:03.952140  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:44:03.952215  322722 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:44:04.028462  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:44:04.028497  322722 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:44:04.085157  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:44:04.085199  322722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:44:04.125412  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:44:04.125519  322722 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:44:04.160779  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:44:04.160804  322722 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:44:04.191114  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:44:04.191138  322722 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:44:04.220706  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:44:04.220794  322722 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:44:04.258403  322722 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:44:04.258507  322722 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:44:04.305765  322722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
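
	As in the default-k8s-diff-port run above, all ten dashboard manifests are applied in a single kubectl invocation by repeating -f, so the addon's objects are created together. A sketch of assembling such a command (paths abbreviated to two of the files for brevity):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		files := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f) // one -f flag per manifest
		}
		cmd := exec.Command("kubectl", args...)
		fmt.Println(cmd.String())
	}
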
	I1119 02:44:05.129790  321785 node_ready.go:49] node "default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:05.129822  321785 node_ready.go:38] duration metric: took 2.756116341s for node "default-k8s-diff-port-167150" to be "Ready" ...
	I1119 02:44:05.129837  321785 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:44:05.129885  321785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:44:06.256339  321785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.886696988s)
	I1119 02:44:06.256572  321785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.627678683s)
	I1119 02:44:06.256753  321785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.845352931s)
	I1119 02:44:06.256787  321785 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.126891057s)
	I1119 02:44:06.257139  321785 api_server.go:72] duration metric: took 4.108035883s to wait for apiserver process to appear ...
	I1119 02:44:06.257150  321785 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:44:06.257168  321785 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1119 02:44:06.259066  321785 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-167150 addons enable metrics-server
	
	I1119 02:44:06.263303  321785 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:44:06.263364  321785 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
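
	A 500 here is expected mid-restart: /healthz aggregates the apiserver's post-start hooks and keeps failing until stragglers such as poststarthook/rbac/bootstrap-roles report in; in this run it flips to 200 about half a second later. A sketch of such a polling loop — URL and interval are illustrative, and TLS verification is skipped because the apiserver presents a cluster-local certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://192.168.94.2:8444/healthz")
			if err == nil {
				ok := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if ok {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // retry until healthz returns 200
		}
	}
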
	I1119 02:44:06.265950  321785 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 02:44:06.580020  322722 node_ready.go:49] node "no-preload-837474" is "Ready"
	I1119 02:44:06.580054  322722 node_ready.go:38] duration metric: took 2.649979229s for node "no-preload-837474" to be "Ready" ...
	I1119 02:44:06.580071  322722 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:44:06.580123  322722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:44:07.361047  322722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.457013506s)
	I1119 02:44:07.361139  322722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.428630966s)
	I1119 02:44:07.361483  322722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.055682339s)
	I1119 02:44:07.361653  322722 api_server.go:72] duration metric: took 3.779702096s to wait for apiserver process to appear ...
	I1119 02:44:07.361664  322722 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:44:07.361683  322722 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:44:07.363457  322722 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-837474 addons enable metrics-server
	
	I1119 02:44:07.368960  322722 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1119 02:44:05.648870  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:08.137496  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	I1119 02:44:06.267236  321785 addons.go:515] duration metric: took 4.117758775s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 02:44:06.757293  321785 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1119 02:44:06.765820  321785 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1119 02:44:06.768145  321785 api_server.go:141] control plane version: v1.34.1
	I1119 02:44:06.768228  321785 api_server.go:131] duration metric: took 511.070321ms to wait for apiserver health ...
	I1119 02:44:06.768253  321785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:44:06.772856  321785 system_pods.go:59] 8 kube-system pods found
	I1119 02:44:06.772896  321785 system_pods.go:61] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:44:06.772908  321785 system_pods.go:61] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:06.772916  321785 system_pods.go:61] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:44:06.772925  321785 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:06.772935  321785 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:06.772942  321785 system_pods.go:61] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:44:06.772950  321785 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:06.772955  321785 system_pods.go:61] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Running
	I1119 02:44:06.772963  321785 system_pods.go:74] duration metric: took 4.692396ms to wait for pod list to return data ...
	I1119 02:44:06.772972  321785 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:44:06.775941  321785 default_sa.go:45] found service account: "default"
	I1119 02:44:06.775962  321785 default_sa.go:55] duration metric: took 2.983868ms for default service account to be created ...
	I1119 02:44:06.775973  321785 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:44:06.779427  321785 system_pods.go:86] 8 kube-system pods found
	I1119 02:44:06.779492  321785 system_pods.go:89] "coredns-66bc5c9577-bht2q" [67eaa46f-0f14-47fe-b518-8fc2339ac090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:44:06.779504  321785 system_pods.go:89] "etcd-default-k8s-diff-port-167150" [ac29d08c-2178-4113-8fe6-ea4363113e84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:06.779511  321785 system_pods.go:89] "kindnet-rs6jh" [05ae880f-e69c-4513-b3ab-f76b85c4ac98] Running
	I1119 02:44:06.779520  321785 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167150" [4d716863-a958-4aa4-ac71-8630c57c1676] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:06.779531  321785 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167150" [5137eb7b-c71f-43be-a32f-908e744cb6c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:06.779537  321785 system_pods.go:89] "kube-proxy-8gl4n" [33cee4c4-dbb5-4bc2-becb-ef2654e266b0] Running
	I1119 02:44:06.779549  321785 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167150" [abe76fdb-41ae-498d-93bd-05734e7bdc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:06.779555  321785 system_pods.go:89] "storage-provisioner" [03ff5a52-b9d1-454f-ab4c-ca75268b32ef] Running
	I1119 02:44:06.779564  321785 system_pods.go:126] duration metric: took 3.584283ms to wait for k8s-apps to be running ...
	I1119 02:44:06.779578  321785 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:44:06.779625  321785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:06.799756  321785 system_svc.go:56] duration metric: took 20.168188ms WaitForService to wait for kubelet
	I1119 02:44:06.799789  321785 kubeadm.go:587] duration metric: took 4.650687088s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:44:06.799820  321785 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:44:06.806693  321785 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:44:06.806725  321785 node_conditions.go:123] node cpu capacity is 8
	I1119 02:44:06.806741  321785 node_conditions.go:105] duration metric: took 6.915023ms to run NodePressure ...
	I1119 02:44:06.806756  321785 start.go:242] waiting for startup goroutines ...
	I1119 02:44:06.806765  321785 start.go:247] waiting for cluster config update ...
	I1119 02:44:06.806780  321785 start.go:256] writing updated cluster config ...
	I1119 02:44:06.807093  321785 ssh_runner.go:195] Run: rm -f paused
	I1119 02:44:06.812498  321785 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:06.819257  321785 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:44:08.826304  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:07.369797  322722 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:44:07.369828  322722 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:44:07.370339  322722 addons.go:515] duration metric: took 3.788144041s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 02:44:07.862007  322722 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:44:07.869064  322722 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:44:07.869276  322722 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:44:08.362770  322722 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:44:08.368557  322722 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 02:44:08.369831  322722 api_server.go:141] control plane version: v1.34.1
	I1119 02:44:08.369859  322722 api_server.go:131] duration metric: took 1.008187663s to wait for apiserver health ...
	I1119 02:44:08.369870  322722 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:44:08.375029  322722 system_pods.go:59] 8 kube-system pods found
	I1119 02:44:08.375075  322722 system_pods.go:61] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:44:08.375108  322722 system_pods.go:61] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:08.375120  322722 system_pods.go:61] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:44:08.375138  322722 system_pods.go:61] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:08.375147  322722 system_pods.go:61] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:08.375155  322722 system_pods.go:61] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:44:08.375198  322722 system_pods.go:61] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:08.375207  322722 system_pods.go:61] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:44:08.375213  322722 system_pods.go:74] duration metric: took 5.337805ms to wait for pod list to return data ...
	I1119 02:44:08.375226  322722 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:44:08.378133  322722 default_sa.go:45] found service account: "default"
	I1119 02:44:08.378153  322722 default_sa.go:55] duration metric: took 2.920658ms for default service account to be created ...
	I1119 02:44:08.378163  322722 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:44:08.381245  322722 system_pods.go:86] 8 kube-system pods found
	I1119 02:44:08.381275  322722 system_pods.go:89] "coredns-66bc5c9577-44bdr" [9ad0000a-752a-4a18-a649-dd63b3e638d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:44:08.381285  322722 system_pods.go:89] "etcd-no-preload-837474" [66ccbb50-4995-4789-9bcc-97834e6635a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:08.381295  322722 system_pods.go:89] "kindnet-96d7l" [d8eb5197-7836-4ec2-9fe3-e6354983a150] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:44:08.381306  322722 system_pods.go:89] "kube-apiserver-no-preload-837474" [fa87d2c9-bf36-4d63-9093-91435507e9f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:08.381318  322722 system_pods.go:89] "kube-controller-manager-no-preload-837474" [c56b763c-5153-431e-9d68-848077ed8eff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:08.381327  322722 system_pods.go:89] "kube-proxy-hmxzk" [0cf4c9ca-e9e0-4bcc-8a93-40ff2d54df4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:44:08.381337  322722 system_pods.go:89] "kube-scheduler-no-preload-837474" [0b5dd44b-f40f-4f7e-89e0-c67cc486a8d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:08.381349  322722 system_pods.go:89] "storage-provisioner" [7b82e1eb-4a04-4145-8163-28073775b6ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:44:08.381356  322722 system_pods.go:126] duration metric: took 3.187892ms to wait for k8s-apps to be running ...
	I1119 02:44:08.381365  322722 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:44:08.381412  322722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:08.399828  322722 system_svc.go:56] duration metric: took 18.454262ms WaitForService to wait for kubelet
	I1119 02:44:08.399865  322722 kubeadm.go:587] duration metric: took 4.817907193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:44:08.399888  322722 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:44:08.403901  322722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:44:08.403929  322722 node_conditions.go:123] node cpu capacity is 8
	I1119 02:44:08.403944  322722 node_conditions.go:105] duration metric: took 4.04954ms to run NodePressure ...
	I1119 02:44:08.403957  322722 start.go:242] waiting for startup goroutines ...
	I1119 02:44:08.403967  322722 start.go:247] waiting for cluster config update ...
	I1119 02:44:08.403980  322722 start.go:256] writing updated cluster config ...
	I1119 02:44:08.404292  322722 ssh_runner.go:195] Run: rm -f paused
	I1119 02:44:08.409302  322722 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:08.414887  322722 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:44:10.421293  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:10.637123  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:13.136638  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:11.324360  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:13.326307  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:12.421604  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:14.422915  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:15.656120  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:18.136265  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:15.824839  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:17.825074  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:16.459334  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:18.920572  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:20.634359  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:22.634632  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:20.323956  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:22.324930  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
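	Note: three test processes (PIDs 320707, 321785, 322722, from concurrently running test profiles) share this stream, so the pod_ready warnings above interleave and are not globally time-ordered. A rough kubectl equivalent of the readiness loop, as a sketch (<context> is a placeholder for whichever profile is being polled; unlike minikube's loop it does not treat a deleted pod as success):
	
	  kubectl --context <context> -n kube-system wait --timeout=4m \
	    --for=condition=Ready pod -l k8s-app=kube-dns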
	
	
	==> CRI-O <==
	Nov 19 02:44:09 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:09.942850312Z" level=info msg="Created container 09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper" id=99d1bf7f-e890-44c4-9b77-96015115f15d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:09 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:09.943471073Z" level=info msg="Starting container: 09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0" id=c3633666-7efd-423e-a7b5-0602063e76c5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:09 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:09.945678998Z" level=info msg="Started container" PID=1731 containerID=09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper id=c3633666-7efd-423e-a7b5-0602063e76c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4712eb504baa8295a5393acd51be4ae787d7ec34ae40a05f03ea986ccd250eaa
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.52760866Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ddfb020d-e0cc-4bb7-88ff-11e59f724789 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.531283886Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=18f5ba74-9ab8-4d51-bf40-099a1e7fa6d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.535543116Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper" id=60bc11de-9b24-4f2f-959b-f50653be2bbe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.535847848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.545738061Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.54641983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.582107034Z" level=info msg="Created container c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper" id=60bc11de-9b24-4f2f-959b-f50653be2bbe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.582802727Z" level=info msg="Starting container: c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e" id=bb7a49f2-66fd-41f8-97e3-8ad60d92a8c4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:10 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:10.58518058Z" level=info msg="Started container" PID=1742 containerID=c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper id=bb7a49f2-66fd-41f8-97e3-8ad60d92a8c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4712eb504baa8295a5393acd51be4ae787d7ec34ae40a05f03ea986ccd250eaa
	Nov 19 02:44:11 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:11.533671243Z" level=info msg="Removing container: 09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0" id=50fe67d6-ef1d-4157-9866-2f8fe8689a8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:11 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:11.544535919Z" level=info msg="Removed container 09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m/dashboard-metrics-scraper" id=50fe67d6-ef1d-4157-9866-2f8fe8689a8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.550684926Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=91cd4a17-1c4d-4db1-8279-699e1b19e3f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.551667041Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=27f39c23-e440-4c51-9daf-0eae1046f23b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.552842299Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3f94c313-40c7-46c1-8f46-24f0c24eba55 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.552974512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.559892609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.560074759Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d299ab7a83f816fb830e8cca9867be86ab2e3ad0cf4e7b8a8124bb0648488a7f/merged/etc/passwd: no such file or directory"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.560110239Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d299ab7a83f816fb830e8cca9867be86ab2e3ad0cf4e7b8a8124bb0648488a7f/merged/etc/group: no such file or directory"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.560405273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.588414705Z" level=info msg="Created container 2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee: kube-system/storage-provisioner/storage-provisioner" id=3f94c313-40c7-46c1-8f46-24f0c24eba55 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.589009111Z" level=info msg="Starting container: 2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee" id=632988fd-6a65-44e5-affc-3312f0f4b4f8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:17 old-k8s-version-987573 crio[566]: time="2025-11-19T02:44:17.591202411Z" level=info msg="Started container" PID=1757 containerID=2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee description=kube-system/storage-provisioner/storage-provisioner id=632988fd-6a65-44e5-affc-3312f0f4b4f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48171d9453f2b4a1efe0b015f63dac5b61dd498fdddb79e04903aae4733578dc
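	Note: the Created/Started/Removing churn above is CRI-O recycling the crash-looping dashboard-metrics-scraper container: 09c387d074a57... exits and is replaced by c8e88b5f77554..., which exits too (see the container table below). A sketch for inspecting this by hand from inside the node, assuming crictl is present there:
	
	  sudo crictl ps -a --name dashboard-metrics-scraper
	  sudo crictl logs c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e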
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2077593bc532c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           8 seconds ago       Running             storage-provisioner         1                   48171d9453f2b       storage-provisioner                              kube-system
	c8e88b5f77554       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   1                   4712eb504baa8       dashboard-metrics-scraper-5f989dc9cf-4jn6m       kubernetes-dashboard
	a0016258b0d08       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   19 seconds ago      Running             kubernetes-dashboard        0                   af7f54ad0c3da       kubernetes-dashboard-8694d4445c-mshqj            kubernetes-dashboard
	0be6a37a59224       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           35 seconds ago      Running             coredns                     0                   29266351cd326       coredns-5dd5756b68-djd8r                         kube-system
	f499f6a025ea2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           35 seconds ago      Running             busybox                     1                   58fccff701069       busybox                                          default
	9838e80b4a113       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           39 seconds ago      Exited              storage-provisioner         0                   48171d9453f2b       storage-provisioner                              kube-system
	de11ec22f706c       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           39 seconds ago      Running             kube-proxy                  0                   eac27cfb28d5c       kube-proxy-tmqhk                                 kube-system
	5ade0b93851f2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           39 seconds ago      Running             kindnet-cni                 0                   a872c47f30de9       kindnet-57t4v                                    kube-system
	698573dc69a8a       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           41 seconds ago      Running             kube-controller-manager     0                   6c384a9b3e8a5       kube-controller-manager-old-k8s-version-987573   kube-system
	0b21b4a61c9e3       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           41 seconds ago      Running             kube-scheduler              0                   2398c1d185249       kube-scheduler-old-k8s-version-987573            kube-system
	52e10aa72ed87       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           41 seconds ago      Running             kube-apiserver              0                   6751465260e59       kube-apiserver-old-k8s-version-987573            kube-system
	a3a95c851a1b1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           41 seconds ago      Running             etcd                        0                   0e512b64a736f       etcd-old-k8s-version-987573                      kube-system
	
	
	==> coredns [0be6a37a592243280ea5c142186391f3f2f26b568b8a07102398749bd16bb41a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56034 - 51200 "HINFO IN 6212132219970849905.7332513038716120323. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.457041818s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-987573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-987573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=old-k8s-version-987573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_42_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:42:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-987573
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:17 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:17 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:17 +0000   Wed, 19 Nov 2025 02:42:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:44:17 +0000   Wed, 19 Nov 2025 02:43:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-987573
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                7b61050c-e4d6-47f6-aa9c-d45cf03b4e83
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 coredns-5dd5756b68-djd8r                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
	  kube-system                 etcd-old-k8s-version-987573                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         104s
	  kube-system                 kindnet-57t4v                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-old-k8s-version-987573             250m (3%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-old-k8s-version-987573    200m (2%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-tmqhk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-old-k8s-version-987573             100m (1%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4jn6m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mshqj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 39s                kube-proxy       
	  Normal  NodeHasSufficientMemory  104s               kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s               kubelet          Node old-k8s-version-987573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s               kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientPID
	  Normal  Starting                 104s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           91s                node-controller  Node old-k8s-version-987573 event: Registered Node old-k8s-version-987573 in Controller
	  Normal  NodeReady                77s                kubelet          Node old-k8s-version-987573 status is now: NodeReady
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node old-k8s-version-987573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node old-k8s-version-987573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node old-k8s-version-987573 event: Registered Node old-k8s-version-987573 in Controller
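	Note: the Allocated totals are the column sums of the pod table above: CPU requests 850m = 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler); memory requests 220Mi = 70Mi + 100Mi + 50Mi; the only limits set are kindnet's (100m CPU, 50Mi) plus coredns's 170Mi memory limit.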
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [a3a95c851a1b1a7b23770436d155ba0f868406c9e5408bb1d6b801e15b851212] <==
	{"level":"info","ts":"2025-11-19T02:43:43.985447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T02:43:43.985545Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T02:43:43.985657Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:43:43.985687Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:43:43.988883Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T02:43:43.989091Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T02:43:43.989102Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T02:43:43.989143Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T02:43:43.989152Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T02:43:45.277648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T02:43:45.277689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T02:43:45.277711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T02:43:45.277725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T02:43:45.277731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T02:43:45.277739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-19T02:43:45.277745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T02:43:45.279139Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-987573 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T02:43:45.279141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:43:45.279158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:43:45.279855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T02:43:45.279956Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T02:43:45.281182Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-19T02:43:45.281359Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T02:44:07.676479Z","caller":"traceutil/trace.go:171","msg":"trace[484018559] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"129.241126ms","start":"2025-11-19T02:44:07.547179Z","end":"2025-11-19T02:44:07.676421Z","steps":["trace[484018559] 'process raft request'  (duration: 128.852937ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:44:07.676944Z","caller":"traceutil/trace.go:171","msg":"trace[159111979] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"129.540798ms","start":"2025-11-19T02:44:07.547253Z","end":"2025-11-19T02:44:07.676794Z","steps":["trace[159111979] 'process raft request'  (duration: 129.004708ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:44:25 up  1:26,  0 user,  load average: 4.34, 3.48, 2.31
	Linux old-k8s-version-987573 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ade0b93851f29eec6e4a88852e33c753160da8ea44034a1ae4d3403b4213d7b] <==
	I1119 02:43:46.948216       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:43:46.948419       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:43:47.041721       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:43:47.041752       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:43:47.041774       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:43:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:43:47.240833       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:43:47.240878       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:43:47.240889       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:43:47.241004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:43:47.641132       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:43:47.641157       1 metrics.go:72] Registering metrics
	I1119 02:43:47.641198       1 controller.go:711] "Syncing nftables rules"
	I1119 02:43:57.240458       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:43:57.240498       1 main.go:301] handling current node
	I1119 02:44:07.242546       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:44:07.243161       1 main.go:301] handling current node
	I1119 02:44:17.240570       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:44:17.240616       1 main.go:301] handling current node
	
	
	==> kube-apiserver [52e10aa72ed87bdafda6e448ab0fe9236452ea9f877e2c66f9761af96e094140] <==
	I1119 02:43:46.196743       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:43:46.244143       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:43:46.253526       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 02:43:46.253614       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 02:43:46.253755       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 02:43:46.253850       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 02:43:46.253531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 02:43:46.254608       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 02:43:46.254642       1 aggregator.go:166] initial CRD sync complete...
	I1119 02:43:46.254650       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 02:43:46.254656       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:43:46.254663       1 cache.go:39] Caches are synced for autoregister controller
	E1119 02:43:46.259298       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 02:43:46.273499       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 02:43:47.048889       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 02:43:47.078934       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 02:43:47.095382       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:43:47.102719       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:43:47.109403       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 02:43:47.144372       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.16.66"}
	I1119 02:43:47.146688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:43:47.159406       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.44.239"}
	I1119 02:43:59.292757       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 02:43:59.441786       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:43:59.543647       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [698573dc69a8a06012cd23a1989bd77a62894912ddd2392fb3c8adab817e74a2] <==
	I1119 02:43:59.375581       1 shared_informer.go:318] Caches are synced for stateful set
	I1119 02:43:59.411929       1 shared_informer.go:318] Caches are synced for persistent volume
	I1119 02:43:59.416268       1 shared_informer.go:318] Caches are synced for ephemeral
	I1119 02:43:59.438570       1 shared_informer.go:318] Caches are synced for attach detach
	I1119 02:43:59.438590       1 shared_informer.go:318] Caches are synced for expand
	I1119 02:43:59.443955       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 02:43:59.548201       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1119 02:43:59.548340       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1119 02:43:59.746937       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mshqj"
	I1119 02:43:59.749136       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4jn6m"
	I1119 02:43:59.755834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="208.07463ms"
	I1119 02:43:59.757919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="210.141586ms"
	I1119 02:43:59.762361       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:43:59.768547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.174262ms"
	I1119 02:43:59.768631       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.237µs"
	I1119 02:43:59.770582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.205953ms"
	I1119 02:43:59.770679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.945µs"
	I1119 02:43:59.776410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.708µs"
	I1119 02:43:59.787790       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:43:59.787813       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 02:44:07.680387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="141.155183ms"
	I1119 02:44:07.680648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="115.957µs"
	I1119 02:44:10.542570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.841µs"
	I1119 02:44:11.545558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.528µs"
	I1119 02:44:12.550825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="128.918µs"
	
	
	==> kube-proxy [de11ec22f706c83cf86b25166cee8deb1a767a0d7161e431ab4ff464ea56370e] <==
	I1119 02:43:46.814188       1 server_others.go:69] "Using iptables proxy"
	I1119 02:43:46.823184       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 02:43:46.843635       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:43:46.846784       1 server_others.go:152] "Using iptables Proxier"
	I1119 02:43:46.846829       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 02:43:46.846837       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 02:43:46.846880       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 02:43:46.847130       1 server.go:846] "Version info" version="v1.28.0"
	I1119 02:43:46.847143       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:46.849616       1 config.go:315] "Starting node config controller"
	I1119 02:43:46.849645       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 02:43:46.849961       1 config.go:188] "Starting service config controller"
	I1119 02:43:46.849984       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 02:43:46.849963       1 config.go:97] "Starting endpoint slice config controller"
	I1119 02:43:46.850025       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 02:43:46.949931       1 shared_informer.go:318] Caches are synced for node config
	I1119 02:43:46.951075       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 02:43:46.951088       1 shared_informer.go:318] Caches are synced for service config
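	Note: kube-proxy detects an IPv4-only cluster, falls back to no-op local-traffic detection for IPv6, and programs iptables rules; those rules are what make service VIPs such as 10.96.0.1 reachable (compare the storage-provisioner dial timeout below). A sketch for listing the programmed service chains from inside the node:
	
	  sudo iptables -t nat -L KUBE-SERVICES -n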
	
	
	==> kube-scheduler [0b21b4a61c9e39b222029f13c6ca3c909e31c027914e269966be2589940c1b05] <==
	I1119 02:43:44.488625       1 serving.go:348] Generated self-signed cert in-memory
	W1119 02:43:46.176539       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:43:46.176579       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:43:46.176621       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:43:46.176645       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:43:46.202478       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1119 02:43:46.202503       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:46.203777       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:43:46.203816       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 02:43:46.204682       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1119 02:43:46.204713       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1119 02:43:46.304057       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
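	Note: the requestheader/authentication warnings are a startup race: the scheduler queried the extension-apiserver-authentication configmap before its RBAC-backed informer had synced, which happens a moment later (02:43:46.304). If such a warning persisted, a hypothetical instantiation of the fix the log itself suggests (binding name invented here; --user substituted for the template's --serviceaccount, since the scheduler authenticates as the user system:kube-scheduler) would be:
	
	  kubectl -n kube-system create rolebinding scheduler-authn-reader \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler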
	
	
	==> kubelet <==
	Nov 19 02:43:48 old-k8s-version-987573 kubelet[732]: E1119 02:43:48.092584     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38b8c793-304e-42c1-b2a0-ecd1032a5962-config-volume podName:38b8c793-304e-42c1-b2a0-ecd1032a5962 nodeName:}" failed. No retries permitted until 2025-11-19 02:43:50.092567656 +0000 UTC m=+6.746470075 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/38b8c793-304e-42c1-b2a0-ecd1032a5962-config-volume") pod "coredns-5dd5756b68-djd8r" (UID: "38b8c793-304e-42c1-b2a0-ecd1032a5962") : object "kube-system"/"coredns" not registered
	Nov 19 02:43:48 old-k8s-version-987573 kubelet[732]: E1119 02:43:48.193722     732 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 19 02:43:48 old-k8s-version-987573 kubelet[732]: E1119 02:43:48.193750     732 projected.go:198] Error preparing data for projected volume kube-api-access-rj25l for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 19 02:43:48 old-k8s-version-987573 kubelet[732]: E1119 02:43:48.193802     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c204876-422a-41f9-9047-80e08d35da45-kube-api-access-rj25l podName:9c204876-422a-41f9-9047-80e08d35da45 nodeName:}" failed. No retries permitted until 2025-11-19 02:43:50.193788067 +0000 UTC m=+6.847690487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rj25l" (UniqueName: "kubernetes.io/projected/9c204876-422a-41f9-9047-80e08d35da45-kube-api-access-rj25l") pod "busybox" (UID: "9c204876-422a-41f9-9047-80e08d35da45") : object "default"/"kube-root-ca.crt" not registered
	Nov 19 02:43:55 old-k8s-version-987573 kubelet[732]: I1119 02:43:55.311083     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.753137     732 topology_manager.go:215] "Topology Admit Handler" podUID="5857a23b-a4e9-46c7-8df9-28cdb04e7452" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-mshqj"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.758633     732 topology_manager.go:215] "Topology Admit Handler" podUID="52a583d3-3a23-4e43-b437-210666e9d26a" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-4jn6m"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.859634     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5857a23b-a4e9-46c7-8df9-28cdb04e7452-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mshqj\" (UID: \"5857a23b-a4e9-46c7-8df9-28cdb04e7452\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshqj"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.859681     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblx9\" (UniqueName: \"kubernetes.io/projected/52a583d3-3a23-4e43-b437-210666e9d26a-kube-api-access-qblx9\") pod \"dashboard-metrics-scraper-5f989dc9cf-4jn6m\" (UID: \"52a583d3-3a23-4e43-b437-210666e9d26a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.859711     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/52a583d3-3a23-4e43-b437-210666e9d26a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4jn6m\" (UID: \"52a583d3-3a23-4e43-b437-210666e9d26a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m"
	Nov 19 02:43:59 old-k8s-version-987573 kubelet[732]: I1119 02:43:59.859903     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbm6\" (UniqueName: \"kubernetes.io/projected/5857a23b-a4e9-46c7-8df9-28cdb04e7452-kube-api-access-zlbm6\") pod \"kubernetes-dashboard-8694d4445c-mshqj\" (UID: \"5857a23b-a4e9-46c7-8df9-28cdb04e7452\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshqj"
	Nov 19 02:44:10 old-k8s-version-987573 kubelet[732]: I1119 02:44:10.526604     732 scope.go:117] "RemoveContainer" containerID="09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0"
	Nov 19 02:44:10 old-k8s-version-987573 kubelet[732]: I1119 02:44:10.542122     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshqj" podStartSLOduration=4.873608043 podCreationTimestamp="2025-11-19 02:43:59 +0000 UTC" firstStartedPulling="2025-11-19 02:44:00.078181732 +0000 UTC m=+16.732084166" lastFinishedPulling="2025-11-19 02:44:06.74663661 +0000 UTC m=+23.400539038" observedRunningTime="2025-11-19 02:44:07.541761794 +0000 UTC m=+24.195664234" watchObservedRunningTime="2025-11-19 02:44:10.542062915 +0000 UTC m=+27.195965356"
	Nov 19 02:44:11 old-k8s-version-987573 kubelet[732]: I1119 02:44:11.531652     732 scope.go:117] "RemoveContainer" containerID="09c387d074a5772ce6728e2da9a5c2f5ec89c2549480300c3fd742b2ab4f32d0"
	Nov 19 02:44:11 old-k8s-version-987573 kubelet[732]: I1119 02:44:11.531806     732 scope.go:117] "RemoveContainer" containerID="c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	Nov 19 02:44:11 old-k8s-version-987573 kubelet[732]: E1119 02:44:11.532201     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4jn6m_kubernetes-dashboard(52a583d3-3a23-4e43-b437-210666e9d26a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m" podUID="52a583d3-3a23-4e43-b437-210666e9d26a"
	Nov 19 02:44:12 old-k8s-version-987573 kubelet[732]: I1119 02:44:12.536980     732 scope.go:117] "RemoveContainer" containerID="c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	Nov 19 02:44:12 old-k8s-version-987573 kubelet[732]: E1119 02:44:12.537348     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4jn6m_kubernetes-dashboard(52a583d3-3a23-4e43-b437-210666e9d26a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m" podUID="52a583d3-3a23-4e43-b437-210666e9d26a"
	Nov 19 02:44:17 old-k8s-version-987573 kubelet[732]: I1119 02:44:17.550149     732 scope.go:117] "RemoveContainer" containerID="9838e80b4a11306fc1d1ad1687e1efac6e8267ea4a072326986fd003834c2d07"
	Nov 19 02:44:20 old-k8s-version-987573 kubelet[732]: I1119 02:44:20.061397     732 scope.go:117] "RemoveContainer" containerID="c8e88b5f77554f0c0105232fbd1aa6d9713330da86439ef7081b285a8151c78e"
	Nov 19 02:44:20 old-k8s-version-987573 kubelet[732]: E1119 02:44:20.061731     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4jn6m_kubernetes-dashboard(52a583d3-3a23-4e43-b437-210666e9d26a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4jn6m" podUID="52a583d3-3a23-4e43-b437-210666e9d26a"
	Nov 19 02:44:21 old-k8s-version-987573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:44:21 old-k8s-version-987573 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:44:21 old-k8s-version-987573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:44:21 old-k8s-version-987573 systemd[1]: kubelet.service: Consumed 1.214s CPU time.
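	Note: the systemd lines at the end are not a kubelet crash; they record the pause under test disabling the kubelet (the same "systemctl disable --now kubelet" step is visible in the embed-certs trace further down). A sketch for pulling the full unit log from inside the node:
	
	  sudo journalctl -u kubelet --no-pager --since "2025-11-19 02:43:40"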
	
	
	==> kubernetes-dashboard [a0016258b0d08479349678ea97b542cd6bed29e5be0daa43e282fc63d368df4b] <==
	2025/11/19 02:44:06 Using namespace: kubernetes-dashboard
	2025/11/19 02:44:06 Using in-cluster config to connect to apiserver
	2025/11/19 02:44:06 Using secret token for csrf signing
	2025/11/19 02:44:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:44:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:44:06 Successful initial request to the apiserver, version: v1.28.0
	2025/11/19 02:44:06 Generating JWE encryption key
	2025/11/19 02:44:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:44:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:44:07 Initializing JWE encryption key from synchronized object
	2025/11/19 02:44:07 Creating in-cluster Sidecar client
	2025/11/19 02:44:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:07 Serving insecurely on HTTP port: 9090
	2025/11/19 02:44:06 Starting overwatch
	
	
	==> storage-provisioner [2077593bc532cc1d14a90e1072ea50812cb23766a988f2cc7d1e6b8f14c3b0ee] <==
	I1119 02:44:17.605538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:44:17.615971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:44:17.616040       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [9838e80b4a11306fc1d1ad1687e1efac6e8267ea4a072326986fd003834c2d07] <==
	I1119 02:43:46.786538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:44:16.789009       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
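	Note: this fatal dial timeout to 10.96.0.1:443 (the in-cluster kubernetes service VIP) killed the first storage-provisioner attempt (9838e80b4a113 in the container table), most plausibly service-networking churn during the restart; its replacement 2077593bc532c acquires the leader lease normally above. A manual check of the VIP's backing endpoints, assuming a kubectl context for this profile:
	
	  kubectl --context old-k8s-version-987573 get endpoints kubernetes -n default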
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-987573 -n old-k8s-version-987573
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-987573 -n old-k8s-version-987573: exit status 2 (314.132886ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-987573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.91s)
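Note: a local repro sketch for this failure, assuming a tree built the way this job builds it (minikube binary at out/minikube-linux-amd64, profile name as above); the commands mirror what the test and its post-mortem helpers run:

  out/minikube-linux-amd64 pause -p old-k8s-version-987573 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-987573 -n old-k8s-version-987573
  kubectl --context old-k8s-version-987573 get po -A --field-selector=status.phase!=Running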

TestStartStop/group/embed-certs/serial/Pause (5.65s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-811173 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-811173 --alsologtostderr -v=1: exit status 80 (1.928098385s)

-- stdout --
	* Pausing node embed-certs-811173 ... 
	
	

-- /stdout --
** stderr ** 
	I1119 02:44:47.794489  333404 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:47.794607  333404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:47.794613  333404 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:47.794620  333404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:47.795038  333404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:47.795369  333404 out.go:368] Setting JSON to false
	I1119 02:44:47.795559  333404 mustload.go:66] Loading cluster: embed-certs-811173
	I1119 02:44:47.796482  333404 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:47.797300  333404 cli_runner.go:164] Run: docker container inspect embed-certs-811173 --format={{.State.Status}}
	I1119 02:44:47.819640  333404 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:44:47.819994  333404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:47.893486  333404 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-19 02:44:47.881947464 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:47.894295  333404 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-811173 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 02:44:47.895895  333404 out.go:179] * Pausing node embed-certs-811173 ... 
	I1119 02:44:47.897229  333404 host.go:66] Checking if "embed-certs-811173" exists ...
	I1119 02:44:47.897599  333404 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:47.897661  333404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-811173
	I1119 02:44:47.925645  333404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/embed-certs-811173/id_rsa Username:docker}
	I1119 02:44:48.028751  333404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:48.044293  333404 pause.go:52] kubelet running: true
	I1119 02:44:48.044358  333404 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:48.253494  333404 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:48.253606  333404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:48.332157  333404 cri.go:89] found id: "93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222"
	I1119 02:44:48.332186  333404 cri.go:89] found id: "b2cdca1146afc3b622739f950e64864d26eba81ead7099474120baed23ee6f0e"
	I1119 02:44:48.332192  333404 cri.go:89] found id: "c54f63e61cd62e1b142da359df33d91caf60f07fa0b8e3232b02d81672c144e4"
	I1119 02:44:48.332196  333404 cri.go:89] found id: "aa517f916a402e98a0e426261b2e8d3cf00858150d67405fda67eaf37bb6e901"
	I1119 02:44:48.332200  333404 cri.go:89] found id: "b23c1eb2d226e72679956724897a9a7086eebbdc3ef47d3921d153ed58b2e05d"
	I1119 02:44:48.332205  333404 cri.go:89] found id: "e0994ea94767873e5f7aa16af71ef5155fc15391a563da35948cadb1520f80bd"
	I1119 02:44:48.332209  333404 cri.go:89] found id: "b9603bf135a48a7fd7f1a7df00bc5ac2ca325854631a2e9109eebbe9c579c3fc"
	I1119 02:44:48.332213  333404 cri.go:89] found id: "05974f8fe2ed9b3af8b149d271de0fd120542bca0e181f00cc290f0684748003"
	I1119 02:44:48.332217  333404 cri.go:89] found id: "706b2dbda2d38ebc2ca3e61f6b17e96a3d75c375c204a2bcebbf88ede678a129"
	I1119 02:44:48.332236  333404 cri.go:89] found id: "684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	I1119 02:44:48.332246  333404 cri.go:89] found id: "cca02da00a4676afe504adf5be3a8411759a7aeae1cf8b33d87c2969c8b35ee0"
	I1119 02:44:48.332251  333404 cri.go:89] found id: ""
	I1119 02:44:48.332294  333404 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:48.343872  333404 retry.go:31] will retry after 305.797013ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:48Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:48.650427  333404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:48.663156  333404 pause.go:52] kubelet running: false
	I1119 02:44:48.663197  333404 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:48.808364  333404 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:48.808491  333404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:48.883369  333404 cri.go:89] found id: "93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222"
	I1119 02:44:48.883396  333404 cri.go:89] found id: "b2cdca1146afc3b622739f950e64864d26eba81ead7099474120baed23ee6f0e"
	I1119 02:44:48.883402  333404 cri.go:89] found id: "c54f63e61cd62e1b142da359df33d91caf60f07fa0b8e3232b02d81672c144e4"
	I1119 02:44:48.883407  333404 cri.go:89] found id: "aa517f916a402e98a0e426261b2e8d3cf00858150d67405fda67eaf37bb6e901"
	I1119 02:44:48.883410  333404 cri.go:89] found id: "b23c1eb2d226e72679956724897a9a7086eebbdc3ef47d3921d153ed58b2e05d"
	I1119 02:44:48.883416  333404 cri.go:89] found id: "e0994ea94767873e5f7aa16af71ef5155fc15391a563da35948cadb1520f80bd"
	I1119 02:44:48.883419  333404 cri.go:89] found id: "b9603bf135a48a7fd7f1a7df00bc5ac2ca325854631a2e9109eebbe9c579c3fc"
	I1119 02:44:48.883423  333404 cri.go:89] found id: "05974f8fe2ed9b3af8b149d271de0fd120542bca0e181f00cc290f0684748003"
	I1119 02:44:48.883426  333404 cri.go:89] found id: "706b2dbda2d38ebc2ca3e61f6b17e96a3d75c375c204a2bcebbf88ede678a129"
	I1119 02:44:48.883452  333404 cri.go:89] found id: "684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	I1119 02:44:48.883457  333404 cri.go:89] found id: "cca02da00a4676afe504adf5be3a8411759a7aeae1cf8b33d87c2969c8b35ee0"
	I1119 02:44:48.883460  333404 cri.go:89] found id: ""
	I1119 02:44:48.883527  333404 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:48.894739  333404 retry.go:31] will retry after 513.46677ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:48Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:49.408478  333404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:49.421041  333404 pause.go:52] kubelet running: false
	I1119 02:44:49.421098  333404 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:49.568541  333404 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:49.568624  333404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:49.633849  333404 cri.go:89] found id: "93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222"
	I1119 02:44:49.633869  333404 cri.go:89] found id: "b2cdca1146afc3b622739f950e64864d26eba81ead7099474120baed23ee6f0e"
	I1119 02:44:49.633873  333404 cri.go:89] found id: "c54f63e61cd62e1b142da359df33d91caf60f07fa0b8e3232b02d81672c144e4"
	I1119 02:44:49.633882  333404 cri.go:89] found id: "aa517f916a402e98a0e426261b2e8d3cf00858150d67405fda67eaf37bb6e901"
	I1119 02:44:49.633885  333404 cri.go:89] found id: "b23c1eb2d226e72679956724897a9a7086eebbdc3ef47d3921d153ed58b2e05d"
	I1119 02:44:49.633888  333404 cri.go:89] found id: "e0994ea94767873e5f7aa16af71ef5155fc15391a563da35948cadb1520f80bd"
	I1119 02:44:49.633891  333404 cri.go:89] found id: "b9603bf135a48a7fd7f1a7df00bc5ac2ca325854631a2e9109eebbe9c579c3fc"
	I1119 02:44:49.633893  333404 cri.go:89] found id: "05974f8fe2ed9b3af8b149d271de0fd120542bca0e181f00cc290f0684748003"
	I1119 02:44:49.633896  333404 cri.go:89] found id: "706b2dbda2d38ebc2ca3e61f6b17e96a3d75c375c204a2bcebbf88ede678a129"
	I1119 02:44:49.633905  333404 cri.go:89] found id: "684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	I1119 02:44:49.633910  333404 cri.go:89] found id: "cca02da00a4676afe504adf5be3a8411759a7aeae1cf8b33d87c2969c8b35ee0"
	I1119 02:44:49.633912  333404 cri.go:89] found id: ""
	I1119 02:44:49.633945  333404 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:49.648321  333404 out.go:203] 
	W1119 02:44:49.649523  333404 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:44:49.649566  333404 out.go:285] * 
	W1119 02:44:49.653609  333404 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:44:49.654704  333404 out.go:203] 

                                                
                                                
** /stderr **
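Note: the GUEST_PAUSE exit above comes from `sudo runc list -f json` failing with `open /run/runc: no such file or directory` on the crio node, so minikube cannot enumerate running containers; the log shows two backoff retries (305ms, 513ms) before giving up. A minimal Go sketch of that retry-with-backoff shape, useful for reproducing the probe outside the test harness (the `listRunning` helper and the hard-coded backoff values are illustrative, not minikube's actual retry.go):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunning shells out the same way the log above does; on a crio node
	// without /run/runc this returns an error. (Illustrative, not minikube code.)
	func listRunning() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		// Roughly the retry cadence visible in the log: short, growing waits.
		backoffs := []time.Duration{305 * time.Millisecond, 513 * time.Millisecond}
		out, err := listRunning()
		for _, d := range backoffs {
			if err == nil {
				break
			}
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
			out, err = listRunning()
		}
		if err != nil {
			fmt.Printf("giving up, GUEST_PAUSE-style failure: %v\n", err)
			return
		}
		fmt.Printf("running containers: %s\n", out)
	}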
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-811173 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-811173
helpers_test.go:243: (dbg) docker inspect embed-certs-811173:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668",
	        "Created": "2025-11-19T02:42:39.275670124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:43:49.479375137Z",
	            "FinishedAt": "2025-11-19T02:43:48.652309218Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/hostname",
	        "HostsPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/hosts",
	        "LogPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668-json.log",
	        "Name": "/embed-certs-811173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-811173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-811173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668",
	                "LowerDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-811173",
	                "Source": "/var/lib/docker/volumes/embed-certs-811173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-811173",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-811173",
	                "name.minikube.sigs.k8s.io": "embed-certs-811173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c973d9ddf5ca4582c6d9a3e3426352bb27d0718cc9c17a004d4d318c5d5344b0",
	            "SandboxKey": "/var/run/docker/netns/c973d9ddf5ca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-811173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3129c4b605594e1d463b2d85e5ed79f025bb6cff93cf80cdce990db8936b5a9c",
	                    "EndpointID": "9a5b314c12870073b566c60829f8bb3d30b754e8bab543728cfe758761427d3f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7a:3f:06:cd:e5:7d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-811173",
	                        "f59ac2b4a856"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
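Note: the inspect output confirms the port mapping the pause command relied on at 02:44:47.925 (22/tcp published on 127.0.0.1:33113). The Go template logged by cli_runner.go above can be replayed to read that port back; a small sketch, assuming a local docker CLI and using this profile's container name purely as an example:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same format string cli_runner.go logs above for the SSH port lookup.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, "embed-certs-811173").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33113
	}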
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-811173 -n embed-certs-811173
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-811173 -n embed-certs-811173: exit status 2 (339.355039ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
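Note: "Running" alongside exit status 2 is consistent here rather than contradictory: the failed pause had already run `sudo systemctl disable --now kubelet` (02:44:48.044), so the node container is still up while kubelet is stopped, and `minikube status` signals that degraded state through its nonzero exit code.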
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-811173 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-811173 logs -n 25: (1.115922122s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-682232                                                                                                                                                                                                               │ disable-driver-mounts-682232 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p old-k8s-version-987573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-167150 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:44:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:44:29.891671  330644 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:29.891773  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.891782  330644 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:29.891786  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.892013  330644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:29.892489  330644 out.go:368] Setting JSON to false
	I1119 02:44:29.893932  330644 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5217,"bootTime":1763515053,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:44:29.894009  330644 start.go:143] virtualization: kvm guest
	I1119 02:44:29.896106  330644 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:44:29.897349  330644 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:44:29.897381  330644 notify.go:221] Checking for updates...
	I1119 02:44:29.899649  330644 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:44:29.900639  330644 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:29.901703  330644 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:44:29.902810  330644 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:44:29.903920  330644 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:44:29.905455  330644 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905620  330644 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905735  330644 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905864  330644 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:44:29.930269  330644 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:44:29.930390  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:29.990852  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:29.980528215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:29.990974  330644 docker.go:319] overlay module found
	I1119 02:44:29.992680  330644 out.go:179] * Using the docker driver based on user configuration
	I1119 02:44:29.993882  330644 start.go:309] selected driver: docker
	I1119 02:44:29.993897  330644 start.go:930] validating driver "docker" against <nil>
	I1119 02:44:29.993908  330644 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:44:29.994485  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:30.055174  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:30.045301349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:30.055367  330644 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 02:44:30.055398  330644 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 02:44:30.055690  330644 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:44:30.057878  330644 out.go:179] * Using Docker driver with root privileges
	I1119 02:44:30.059068  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:30.059130  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:30.059141  330644 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:44:30.059196  330644 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:30.060543  330644 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:44:30.061681  330644 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:44:30.062975  330644 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:44:30.064114  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.064143  330644 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:44:30.064167  330644 cache.go:65] Caching tarball of preloaded images
	I1119 02:44:30.064199  330644 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:44:30.064251  330644 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:44:30.064266  330644 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:44:30.064364  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:30.064387  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json: {Name:mk5f6a602a7486c803f28ee981bc4fb72f30089f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:30.086997  330644 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:44:30.087020  330644 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:44:30.087033  330644 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:44:30.087059  330644 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:44:30.087146  330644 start.go:364] duration metric: took 69.531µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:44:30.087169  330644 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:30.087250  330644 start.go:125] createHost starting for "" (driver="docker")
	W1119 02:44:25.920223  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:28.420250  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:30.420774  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:29.634283  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:31.634456  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:34.134853  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:29.824614  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:31.825210  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:33.861933  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:30.090250  330644 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:44:30.090540  330644 start.go:159] libmachine.API.Create for "newest-cni-956139" (driver="docker")
	I1119 02:44:30.090580  330644 client.go:173] LocalClient.Create starting
	I1119 02:44:30.090711  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 02:44:30.090762  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090788  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.090868  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 02:44:30.090897  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090911  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.091311  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:44:30.108825  330644 cli_runner.go:211] docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:44:30.108874  330644 network_create.go:284] running [docker network inspect newest-cni-956139] to gather additional debugging logs...
	I1119 02:44:30.108888  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139
	W1119 02:44:30.125848  330644 cli_runner.go:211] docker network inspect newest-cni-956139 returned with exit code 1
	I1119 02:44:30.125873  330644 network_create.go:287] error running [docker network inspect newest-cni-956139]: docker network inspect newest-cni-956139: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-956139 not found
	I1119 02:44:30.125887  330644 network_create.go:289] output of [docker network inspect newest-cni-956139]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-956139 not found
	
	** /stderr **
	I1119 02:44:30.126008  330644 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:30.145372  330644 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
	I1119 02:44:30.146006  330644 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-70e7d73f86d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:64:3f:46:8e:7a} reservation:<nil>}
	I1119 02:44:30.146778  330644 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ef477b5a23 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:eb:22:b3:62:92} reservation:<nil>}
	I1119 02:44:30.147612  330644 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee2320}
	I1119 02:44:30.147633  330644 network_create.go:124] attempt to create docker network newest-cni-956139 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 02:44:30.147689  330644 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-956139 newest-cni-956139
	I1119 02:44:30.194747  330644 network_create.go:108] docker network newest-cni-956139 192.168.76.0/24 created
	I1119 02:44:30.194772  330644 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-956139" container
	I1119 02:44:30.194838  330644 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:44:30.212175  330644 cli_runner.go:164] Run: docker volume create newest-cni-956139 --label name.minikube.sigs.k8s.io=newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:44:30.229588  330644 oci.go:103] Successfully created a docker volume newest-cni-956139
	I1119 02:44:30.229664  330644 cli_runner.go:164] Run: docker run --rm --name newest-cni-956139-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --entrypoint /usr/bin/test -v newest-cni-956139:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:44:30.612069  330644 oci.go:107] Successfully prepared a docker volume newest-cni-956139
	I1119 02:44:30.612124  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.612132  330644 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:44:30.612187  330644 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1119 02:44:32.919409  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:34.920166  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:34.646141  320707 pod_ready.go:94] pod "coredns-66bc5c9577-6zqr2" is "Ready"
	I1119 02:44:34.646170  320707 pod_ready.go:86] duration metric: took 35.016957338s for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.648819  320707 pod_ready.go:83] waiting for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.831828  320707 pod_ready.go:94] pod "etcd-embed-certs-811173" is "Ready"
	I1119 02:44:34.831852  320707 pod_ready.go:86] duration metric: took 183.006168ms for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.834239  320707 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.837643  320707 pod_ready.go:94] pod "kube-apiserver-embed-certs-811173" is "Ready"
	I1119 02:44:34.837663  320707 pod_ready.go:86] duration metric: took 3.400351ms for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.839329  320707 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.842652  320707 pod_ready.go:94] pod "kube-controller-manager-embed-certs-811173" is "Ready"
	I1119 02:44:34.842670  320707 pod_ready.go:86] duration metric: took 3.319388ms for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.032627  320707 pod_ready.go:83] waiting for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.432934  320707 pod_ready.go:94] pod "kube-proxy-s5bzz" is "Ready"
	I1119 02:44:35.432959  320707 pod_ready.go:86] duration metric: took 400.306652ms for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.633961  320707 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032469  320707 pod_ready.go:94] pod "kube-scheduler-embed-certs-811173" is "Ready"
	I1119 02:44:36.032499  320707 pod_ready.go:86] duration metric: took 398.480495ms for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032511  320707 pod_ready.go:40] duration metric: took 36.406499301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:36.080404  320707 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:36.082160  320707 out.go:179] * Done! kubectl is now configured to use "embed-certs-811173" cluster and "default" namespace by default
	I1119 02:44:34.960079  330644 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.347852696s)
	I1119 02:44:34.960108  330644 kic.go:203] duration metric: took 4.347972861s to extract preloaded images to volume ...
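
The extraction step that just completed is the kic preload pattern: the lz4 image tarball is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, and unpacked into the named volume that later becomes the node's /var, so the node boots with its images already in CRI-O's store. The same pattern generalized (a sketch; KICBASE is the pinned kicbase-builds image from the log, the tarball path is illustrative):

	KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$HOME/.minikube/cache/preloaded-tarball/preload.tar.lz4:/preloaded.tar:ro" \
	  -v newest-cni-956139:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir
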
	W1119 02:44:34.960206  330644 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:44:34.960254  330644 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:44:34.960300  330644 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:44:35.014083  330644 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-956139 --name newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-956139 --network newest-cni-956139 --ip 192.168.76.2 --volume newest-cni-956139:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:44:35.325493  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Running}}
	I1119 02:44:35.343669  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.361759  330644 cli_runner.go:164] Run: docker exec newest-cni-956139 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:44:35.406925  330644 oci.go:144] the created container "newest-cni-956139" has a running status.
	I1119 02:44:35.406959  330644 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa...
	I1119 02:44:35.779267  330644 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:44:35.805615  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.826512  330644 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:44:35.826530  330644 kic_runner.go:114] Args: [docker exec --privileged newest-cni-956139 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:44:35.871319  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.889991  330644 machine.go:94] provisionDockerMachine start ...
	I1119 02:44:35.890097  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:35.909789  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:35.910136  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:35.910158  330644 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:44:36.043778  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.043805  330644 ubuntu.go:182] provisioning hostname "newest-cni-956139"
	I1119 02:44:36.043885  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.065697  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.065904  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.065918  330644 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956139 && echo "newest-cni-956139" | sudo tee /etc/hostname
	I1119 02:44:36.211004  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.211088  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.229392  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.229616  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.229635  330644 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956139/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:44:36.359138  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:44:36.359177  330644 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:44:36.359210  330644 ubuntu.go:190] setting up certificates
	I1119 02:44:36.359219  330644 provision.go:84] configureAuth start
	I1119 02:44:36.359262  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:36.381048  330644 provision.go:143] copyHostCerts
	I1119 02:44:36.381118  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:44:36.381134  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:44:36.381241  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:44:36.381393  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:44:36.381407  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:44:36.381473  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:44:36.381598  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:44:36.381613  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:44:36.381659  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:44:36.381762  330644 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956139 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956139]
	I1119 02:44:36.425094  330644 provision.go:177] copyRemoteCerts
	I1119 02:44:36.425145  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:44:36.425178  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.444152  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.542494  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:44:36.560963  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:44:36.577617  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:44:36.594302  330644 provision.go:87] duration metric: took 235.073311ms to configureAuth
	I1119 02:44:36.594322  330644 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:44:36.594527  330644 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:36.594625  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.612019  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.612218  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.612232  330644 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:44:36.879790  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:44:36.879819  330644 machine.go:97] duration metric: took 989.804229ms to provisionDockerMachine
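
The provisioning step above writes an environment file and then restarts the runtime; the kicbase crio systemd unit is expected to source /etc/sysconfig/crio.minikube (an assumption about the unit file, consistent with the restart succeeding here). A quick check from inside the node:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl show crio -p EnvironmentFiles   # confirm the unit actually loads the file
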
	I1119 02:44:36.879830  330644 client.go:176] duration metric: took 6.789240603s to LocalClient.Create
	I1119 02:44:36.879851  330644 start.go:167] duration metric: took 6.789312626s to libmachine.API.Create "newest-cni-956139"
	I1119 02:44:36.879860  330644 start.go:293] postStartSetup for "newest-cni-956139" (driver="docker")
	I1119 02:44:36.879872  330644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:44:36.879933  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:44:36.879968  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.898156  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.993744  330644 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:44:36.997203  330644 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:44:36.997235  330644 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:44:36.997254  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:44:36.997312  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:44:36.997404  330644 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:44:36.997536  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:44:37.005305  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:37.024142  330644 start.go:296] duration metric: took 144.272497ms for postStartSetup
	I1119 02:44:37.024490  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.042142  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:37.042364  330644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:44:37.042421  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.060279  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.151155  330644 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:44:37.155487  330644 start.go:128] duration metric: took 7.068223226s to createHost
	I1119 02:44:37.155509  330644 start.go:83] releasing machines lock for "newest-cni-956139", held for 7.068353821s
	I1119 02:44:37.155567  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.172738  330644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:44:37.172750  330644 ssh_runner.go:195] Run: cat /version.json
	I1119 02:44:37.172802  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.172817  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.191403  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.191761  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.349781  330644 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:37.356447  330644 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:44:37.390971  330644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:44:37.395386  330644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:44:37.395452  330644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:44:37.420966  330644 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
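
Note that the conflicting CNI configs are not deleted: the find/-exec above renames anything matching *bridge* or *podman* to *.mk_disabled so CRI-O stops loading it while leaving it recoverable. A stand-alone, quoted equivalent of the logged command:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
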
	I1119 02:44:37.421000  330644 start.go:496] detecting cgroup driver to use...
	I1119 02:44:37.421031  330644 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:44:37.421116  330644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:44:37.437016  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:44:37.448636  330644 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:44:37.448680  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:44:37.464103  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:44:37.483229  330644 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:37.569719  330644 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:37.663891  330644 docker.go:234] disabling docker service ...
	I1119 02:44:37.663946  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:37.684672  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:37.699707  330644 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:37.783938  330644 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:37.866466  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:44:37.878906  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:37.893148  330644 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:37.893200  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.903765  330644 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:37.903825  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.912380  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.922240  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.930944  330644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:37.938625  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.947066  330644 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.960171  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.968261  330644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:37.975267  330644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:37.982398  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.060067  330644 ssh_runner.go:195] Run: sudo systemctl restart crio
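
The sed sequence above patches /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, a pod-scoped conmon cgroup, and a default sysctl that opens unprivileged low ports for pod sandboxes. Reconstructed from those sed expressions (section headers assumed from the stock crio.conf layout, not captured from the machine), the drop-in should end up roughly as:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
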
	I1119 02:44:38.192960  330644 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:38.193022  330644 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:38.196763  330644 start.go:564] Will wait 60s for crictl version
	I1119 02:44:38.196824  330644 ssh_runner.go:195] Run: which crictl
	I1119 02:44:38.200161  330644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:38.225001  330644 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:44:38.225065  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.251944  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.282138  330644 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:44:38.283487  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:38.300312  330644 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:38.304280  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
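
The hosts update above is deliberately not a sed -i: it filters out any stale host.minikube.internal line, appends the fresh mapping, and copies a temp file back over /etc/hosts, because Docker bind-mounts /etc/hosts into the container and an in-place rename would fail with "Device or resource busy". Generalized sketch (NAME/IP are placeholders):

	NAME=host.minikube.internal; IP=192.168.76.1
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts
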
	I1119 02:44:38.315573  330644 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1119 02:44:36.325065  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:38.824893  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:38.316650  330644 kubeadm.go:884] updating cluster {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:38.316772  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:38.316823  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.347925  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.347943  330644 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:44:38.348024  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.371370  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.371386  330644 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:38.371393  330644 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:44:38.371489  330644 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:44:38.371568  330644 ssh_runner.go:195] Run: crio config
	I1119 02:44:38.414403  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:38.414425  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:38.414455  330644 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:44:38.414480  330644 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956139 NodeName:newest-cni-956139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:38.414596  330644 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:44:38.414650  330644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:38.422980  330644 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:38.423037  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:38.430764  330644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 02:44:38.442899  330644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:38.457503  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
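
The kubeadm.yaml just written stacks the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration shown in full above into a single 2211-byte file. A sanity check before the real init, assuming the `kubeadm config validate` subcommand available in recent kubeadm releases:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
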
	I1119 02:44:38.470194  330644 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:38.473583  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:38.482869  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.562300  330644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:38.585622  330644 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139 for IP: 192.168.76.2
	I1119 02:44:38.585639  330644 certs.go:195] generating shared ca certs ...
	I1119 02:44:38.585658  330644 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.585812  330644 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:38.585880  330644 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:38.585900  330644 certs.go:257] generating profile certs ...
	I1119 02:44:38.585973  330644 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key
	I1119 02:44:38.585994  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt with IP's: []
	I1119 02:44:38.886736  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt ...
	I1119 02:44:38.886761  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt: {Name:mkb981b48727217d5d544f8c1ece639a24196b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.886914  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key ...
	I1119 02:44:38.886927  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key: {Name:mkf09d335927b94ecd83db709f24055ce131f9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.887002  330644 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d
	I1119 02:44:38.887016  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 02:44:39.078031  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d ...
	I1119 02:44:39.078059  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d: {Name:mkcff50d0bd0e5de553650f0790abc33df1f3d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078203  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d ...
	I1119 02:44:39.078217  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d: {Name:mk332d91d4c4926805e4ae3abcbd91571604bef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078314  330644 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt
	I1119 02:44:39.078410  330644 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key
	I1119 02:44:39.078500  330644 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key
	I1119 02:44:39.078517  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt with IP's: []
	I1119 02:44:39.492473  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt ...
	I1119 02:44:39.492501  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt: {Name:mk2d2a0752005ddbf3ff7866b2d888f6c88921c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.492685  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key ...
	I1119 02:44:39.492708  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key: {Name:mk0676b22a9381558c3b1f8b4d9f9ded76cf6a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
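
All three profile certs are now on disk. To confirm the apiserver cert really carries the SANs requested at 02:44:38.887 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2), read it back with openssl (path taken from the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
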
	I1119 02:44:39.492943  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:39.492986  330644 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:39.493002  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:39.493035  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:39.493063  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:39.493096  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:39.493152  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:39.493921  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:39.511675  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:39.528321  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:39.545416  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:39.561752  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:44:39.578259  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:44:39.594332  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:39.610201  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:44:39.626532  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:39.646920  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:39.663725  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:39.680824  330644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:39.692613  330644 ssh_runner.go:195] Run: openssl version
	I1119 02:44:39.699229  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:39.708084  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711716  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711771  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.746645  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:44:39.754713  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:39.762929  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766299  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766335  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.800570  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:39.808541  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:39.816270  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819952  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819989  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.854738  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
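
The `openssl x509 -hash -noout` calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 symlink names come from. Doing one by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run
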
	I1119 02:44:39.863275  330644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:39.866811  330644 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:44:39.866866  330644 kubeadm.go:401] StartCluster: {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:39.866959  330644 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:39.867032  330644 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:39.893234  330644 cri.go:89] found id: ""
	I1119 02:44:39.893298  330644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:39.901084  330644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:44:39.908779  330644 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:44:39.908820  330644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:44:39.915918  330644 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:44:39.915956  330644 kubeadm.go:158] found existing configuration files:
	
	I1119 02:44:39.916000  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:44:39.924150  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:44:39.924192  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:44:39.931134  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:44:39.938135  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:44:39.938182  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:44:39.945082  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.952377  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:44:39.952425  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.959861  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:44:39.966757  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:44:39.966801  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:44:39.973926  330644 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:44:40.012094  330644 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:44:40.012170  330644 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:44:40.051599  330644 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:44:40.051753  330644 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:44:40.051826  330644 kubeadm.go:319] OS: Linux
	I1119 02:44:40.051888  330644 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:44:40.051939  330644 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:44:40.052007  330644 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:44:40.052083  330644 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:44:40.052163  330644 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:44:40.052233  330644 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:44:40.052284  330644 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:44:40.052344  330644 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:44:40.110629  330644 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:44:40.110786  330644 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:44:40.110919  330644 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:44:40.118761  330644 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1119 02:44:37.420903  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:39.920505  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:40.823992  321785 pod_ready.go:94] pod "coredns-66bc5c9577-bht2q" is "Ready"
	I1119 02:44:40.824024  321785 pod_ready.go:86] duration metric: took 34.00468535s for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.826065  321785 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.829510  321785 pod_ready.go:94] pod "etcd-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.829533  321785 pod_ready.go:86] duration metric: took 3.445845ms for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.831135  321785 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.834490  321785 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.834508  321785 pod_ready.go:86] duration metric: took 3.353905ms for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.836222  321785 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.022776  321785 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:41.022802  321785 pod_ready.go:86] duration metric: took 186.560827ms for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.222650  321785 pod_ready.go:83] waiting for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.623243  321785 pod_ready.go:94] pod "kube-proxy-8gl4n" is "Ready"
	I1119 02:44:41.623276  321785 pod_ready.go:86] duration metric: took 400.60046ms for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.823313  321785 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222639  321785 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:42.222665  321785 pod_ready.go:86] duration metric: took 399.326737ms for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222675  321785 pod_ready.go:40] duration metric: took 35.410146964s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:42.265461  321785 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:42.267962  321785 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-167150" cluster and "default" namespace by default
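	The pod_ready.go lines above poll kube-system pods by label until each reports Ready. A rough standalone equivalent with kubectl (a sketch, assuming the context name and the k8s-app/component labels shown in this log):
	
	  kubectl --context default-k8s-diff-port-167150 -n kube-system \
	    wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s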
	I1119 02:44:40.120572  330644 out.go:252]   - Generating certificates and keys ...
	I1119 02:44:40.120676  330644 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:44:40.120767  330644 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:44:40.285783  330644 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:44:40.596128  330644 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:44:40.775594  330644 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:44:40.856728  330644 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:44:41.447992  330644 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:44:41.448141  330644 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.120936  330644 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:44:42.121139  330644 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.400506  330644 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:44:42.544344  330644 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:44:42.820587  330644 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:44:42.820689  330644 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:44:42.995265  330644 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:44:43.162291  330644 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:44:43.196763  330644 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:44:43.556128  330644 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:44:43.787728  330644 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
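	The certificate phase above signs the etcd serving cert for specific DNS names and IPs. To confirm the SANs on disk match what was logged, the cert can be inspected with openssl (a sketch; file names follow kubeadm's defaults under the certificateDir logged earlier, and -ext needs OpenSSL 1.1.1+):
	
	  sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt \
	    -noout -subject -ext subjectAltName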
	I1119 02:44:43.788303  330644 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:44:43.792218  330644 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:44:43.793609  330644 out.go:252]   - Booting up control plane ...
	I1119 02:44:43.793714  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:44:43.793818  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:44:43.794447  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:44:43.811365  330644 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:44:43.811606  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:44:43.817701  330644 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:44:43.818010  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:44:43.818083  330644 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:44:43.912675  330644 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:44:43.912849  330644 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1119 02:44:42.419894  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:44.921381  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:46.419827  322722 pod_ready.go:94] pod "coredns-66bc5c9577-44bdr" is "Ready"
	I1119 02:44:46.419857  322722 pod_ready.go:86] duration metric: took 38.00494675s for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.422128  322722 pod_ready.go:83] waiting for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.425877  322722 pod_ready.go:94] pod "etcd-no-preload-837474" is "Ready"
	I1119 02:44:46.425901  322722 pod_ready.go:86] duration metric: took 3.744715ms for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.427596  322722 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.430915  322722 pod_ready.go:94] pod "kube-apiserver-no-preload-837474" is "Ready"
	I1119 02:44:46.430936  322722 pod_ready.go:86] duration metric: took 3.318971ms for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.432827  322722 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.619267  322722 pod_ready.go:94] pod "kube-controller-manager-no-preload-837474" is "Ready"
	I1119 02:44:46.619298  322722 pod_ready.go:86] duration metric: took 186.448054ms for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.819349  322722 pod_ready.go:83] waiting for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.219089  322722 pod_ready.go:94] pod "kube-proxy-hmxzk" is "Ready"
	I1119 02:44:47.219115  322722 pod_ready.go:86] duration metric: took 399.745795ms for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.418899  322722 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819293  322722 pod_ready.go:94] pod "kube-scheduler-no-preload-837474" is "Ready"
	I1119 02:44:47.819318  322722 pod_ready.go:86] duration metric: took 400.396392ms for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819332  322722 pod_ready.go:40] duration metric: took 39.409998426s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:47.882918  322722 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:47.884667  322722 out.go:179] * Done! kubectl is now configured to use "no-preload-837474" cluster and "default" namespace by default
	I1119 02:44:44.914267  330644 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001584412s
	I1119 02:44:44.919834  330644 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:44:44.919954  330644 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:44:44.920098  330644 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:44:44.920202  330644 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:44:46.082445  330644 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.162579737s
	I1119 02:44:46.762642  330644 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.842786839s
	I1119 02:44:48.421451  330644 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501654588s
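	The control-plane checks above probe fixed local endpoints. The same probes can be run by hand on the node (e.g. via minikube ssh); a sketch using the exact URLs from the log (-k skips TLS verification; anonymous access to these paths is permitted by default RBAC but may be restricted):
	
	  curl -s  http://127.0.0.1:10248/healthz    # kubelet
	  curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
	  curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
	  curl -sk https://192.168.76.2:8443/livez   # kube-apiserver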
	I1119 02:44:48.432989  330644 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:44:48.442965  330644 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:44:48.450246  330644 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:44:48.450564  330644 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-956139 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:44:48.457630  330644 kubeadm.go:319] [bootstrap-token] Using token: bpq1za.q7wy15mme3dprzfy
	I1119 02:44:48.458785  330644 out.go:252]   - Configuring RBAC rules ...
	I1119 02:44:48.458936  330644 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:44:48.461935  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:44:48.466914  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:44:48.469590  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:44:48.472718  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:44:48.475031  330644 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:44:48.827275  330644 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:44:49.241863  330644 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:44:49.827545  330644 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:44:49.828386  330644 kubeadm.go:319] 
	I1119 02:44:49.828472  330644 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:44:49.828485  330644 kubeadm.go:319] 
	I1119 02:44:49.828608  330644 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:44:49.828625  330644 kubeadm.go:319] 
	I1119 02:44:49.828650  330644 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:44:49.828731  330644 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:44:49.828818  330644 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:44:49.828832  330644 kubeadm.go:319] 
	I1119 02:44:49.828906  330644 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:44:49.828916  330644 kubeadm.go:319] 
	I1119 02:44:49.828980  330644 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:44:49.828990  330644 kubeadm.go:319] 
	I1119 02:44:49.829055  330644 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:44:49.829166  330644 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:44:49.829226  330644 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:44:49.829233  330644 kubeadm.go:319] 
	I1119 02:44:49.829341  330644 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:44:49.829450  330644 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:44:49.829464  330644 kubeadm.go:319] 
	I1119 02:44:49.829567  330644 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.829694  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:44:49.829727  330644 kubeadm.go:319] 	--control-plane 
	I1119 02:44:49.829737  330644 kubeadm.go:319] 
	I1119 02:44:49.829830  330644 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:44:49.829840  330644 kubeadm.go:319] 
	I1119 02:44:49.829940  330644 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.830063  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:44:49.832633  330644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:44:49.832729  330644 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
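	The join commands above embed a bootstrap token and a CA public-key hash. If the hash ever needs to be recomputed, the standard kubeadm recipe works against the CA file in the certificateDir logged earlier (a sketch; assumes an RSA CA key, which is kubeadm's default):
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	
	Active bootstrap tokens can be listed on the control plane with 'kubeadm token list'.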
	I1119 02:44:49.832752  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:49.832761  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:49.834994  330644 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:44:49.836244  330644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:44:49.840560  330644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:44:49.840576  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:44:49.852577  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
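	Once the kindnet manifest is applied, the CNI plumbing can be spot-checked: the stat call above already verified /opt/cni/bin/portmap, and the kindnet pods should come up in kube-system. A sketch, assuming the profile/context name from this run:
	
	  minikube ssh -p newest-cni-956139 -- ls /opt/cni/bin
	  kubectl --context newest-cni-956139 -n kube-system get pods -o wide | grep kindnet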
	
	
	==> CRI-O <==
	Nov 19 02:44:12 embed-certs-811173 crio[565]: time="2025-11-19T02:44:12.031646273Z" level=info msg="Started container" PID=1744 containerID=efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper id=209763d3-2bb2-4953-ba3a-d631212257d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b375ed06ba298eb5b9adf64ed48dfd7a8d0e3b7fe91644d14ba680c1b574f9dc
	Nov 19 02:44:12 embed-certs-811173 crio[565]: time="2025-11-19T02:44:12.98129437Z" level=info msg="Removing container: c7a10a12fb73d562dedac0ef67ec7f7db10ae17c3abf93841264c0f9068cdd13" id=58223b52-559d-48c7-bbb3-da85eb853c4f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:13 embed-certs-811173 crio[565]: time="2025-11-19T02:44:13.02261412Z" level=info msg="Removed container c7a10a12fb73d562dedac0ef67ec7f7db10ae17c3abf93841264c0f9068cdd13: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper" id=58223b52-559d-48c7-bbb3-da85eb853c4f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.896251359Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fe4c9d93-25ea-4773-9baf-efdb9ff88062 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.897211184Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=94d10610-8467-45d3-81c8-fa430d2ba178 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.898222988Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper" id=5e5a56c7-f360-4832-947f-7acdd3512229 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.898368984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.903810069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.904299695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.935563695Z" level=info msg="Created container 684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper" id=5e5a56c7-f360-4832-947f-7acdd3512229 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.936124461Z" level=info msg="Starting container: 684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d" id=05a8bad4-76a9-4442-9195-5750f16a8d3b name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.937934982Z" level=info msg="Started container" PID=1754 containerID=684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper id=05a8bad4-76a9-4442-9195-5750f16a8d3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b375ed06ba298eb5b9adf64ed48dfd7a8d0e3b7fe91644d14ba680c1b574f9dc
	Nov 19 02:44:28 embed-certs-811173 crio[565]: time="2025-11-19T02:44:28.020883473Z" level=info msg="Removing container: efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5" id=2ed5b5b5-06ac-43b0-b733-e215176c6c66 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:28 embed-certs-811173 crio[565]: time="2025-11-19T02:44:28.030021725Z" level=info msg="Removed container efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper" id=2ed5b5b5-06ac-43b0-b733-e215176c6c66 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.028770429Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=29f1893f-43ac-4d23-90d6-264e6b6ac1dd name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.029746598Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ed009695-3dbd-4e8c-8bfd-314404daca4e name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.030964295Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7a378216-5e08-4233-b6a0-1808bad3cd9f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.03135499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.037741267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.038384917Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a0eb3d3406653b9d587d91bc187656d46e59d92ff943b22bc5dbda545d2e80ad/merged/etc/passwd: no such file or directory"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.038425463Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a0eb3d3406653b9d587d91bc187656d46e59d92ff943b22bc5dbda545d2e80ad/merged/etc/group: no such file or directory"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.038885644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.067638806Z" level=info msg="Created container 93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222: kube-system/storage-provisioner/storage-provisioner" id=7a378216-5e08-4233-b6a0-1808bad3cd9f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.068248897Z" level=info msg="Starting container: 93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222" id=1327fb43-e13a-4c95-a151-1296da9bc67f name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.070080558Z" level=info msg="Started container" PID=1768 containerID=93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222 description=kube-system/storage-provisioner/storage-provisioner id=1327fb43-e13a-4c95-a151-1296da9bc67f name=/runtime.v1.RuntimeService/StartContainer sandboxID=462cb72b0d500ef960250d7128cef835c1d31c2bce8abe377be8880d71c5622f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	93226249c61ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   462cb72b0d500       storage-provisioner                          kube-system
	684aff13b7f3d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   b375ed06ba298       dashboard-metrics-scraper-6ffb444bf9-cvcns   kubernetes-dashboard
	cca02da00a467       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   65a946e37ec45       kubernetes-dashboard-855c9754f9-22wsb        kubernetes-dashboard
	3579214a76a5b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   9920d7d5dbb75       busybox                                      default
	b2cdca1146afc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   abdee3af82196       coredns-66bc5c9577-6zqr2                     kube-system
	c54f63e61cd62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   462cb72b0d500       storage-provisioner                          kube-system
	aa517f916a402       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   6a7d11717fc7d       kindnet-b2w9g                                kube-system
	b23c1eb2d226e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   b4275d476f2c6       kube-proxy-s5bzz                             kube-system
	e0994ea947678       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   2cca5d5d90e11       kube-scheduler-embed-certs-811173            kube-system
	b9603bf135a48       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   cdec74f3d5b9f       kube-apiserver-embed-certs-811173            kube-system
	05974f8fe2ed9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   14e681817667b       kube-controller-manager-embed-certs-811173   kube-system
	706b2dbda2d38       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   e080a952b1840       etcd-embed-certs-811173                      kube-system
	
	
	==> coredns [b2cdca1146afc3b622739f950e64864d26eba81ead7099474120baed23ee6f0e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:51116 - 40744 "HINFO IN 6888089067274774528.5535670819203862021. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.490073089s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
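	The i/o timeouts above are CoreDNS failing to reach the API service IP (10.96.0.1:443), which typically means the service network had not yet been programmed when CoreDNS started; the kindnet section further down shows its nftables rules syncing at 02:44:00. A quick way to re-check once the node settles (a sketch, using the context name from this report):
	
	  kubectl --context embed-certs-811173 get svc kubernetes
	  kubectl --context embed-certs-811173 -n kube-system logs -l k8s-app=kube-dns --tail=20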
	
	
	==> describe nodes <==
	Name:               embed-certs-811173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-811173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=embed-certs-811173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:42:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-811173
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:28 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:28 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:28 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:44:28 +0000   Wed, 19 Nov 2025 02:43:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-811173
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                c32197b9-e1d7-4c8f-bcdd-84def1c02350
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-6zqr2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-811173                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-b2w9g                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-811173             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-embed-certs-811173    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-s5bzz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-811173             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cvcns    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-22wsb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node embed-certs-811173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node embed-certs-811173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node embed-certs-811173 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node embed-certs-811173 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node embed-certs-811173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node embed-certs-811173 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node embed-certs-811173 event: Registered Node embed-certs-811173 in Controller
	  Normal  NodeReady                92s                  kubelet          Node embed-certs-811173 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node embed-certs-811173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node embed-certs-811173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node embed-certs-811173 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node embed-certs-811173 event: Registered Node embed-certs-811173 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [706b2dbda2d38ebc2ca3e61f6b17e96a3d75c375c204a2bcebbf88ede678a129] <==
	{"level":"warn","ts":"2025-11-19T02:43:57.471176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.478793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.484698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.490676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.497223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.508510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.514137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.520773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.526610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.532445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.538854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.544840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.552322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.558849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.564624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.570908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.576707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.600151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.606896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.612702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.661705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:44:34.791880Z","caller":"traceutil/trace.go:172","msg":"trace[1717592132] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:693; }","duration":"141.98362ms","start":"2025-11-19T02:44:34.649873Z","end":"2025-11-19T02:44:34.791856Z","steps":["trace[1717592132] 'read index received'  (duration: 141.977964ms)","trace[1717592132] 'applied index is now lower than readState.Index'  (duration: 4.779µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:44:34.828493Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.585318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-811173\" limit:1 ","response":"range_response_count:1 size:5935"}
	{"level":"info","ts":"2025-11-19T02:44:34.828594Z","caller":"traceutil/trace.go:172","msg":"trace[128514129] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-811173; range_end:; response_count:1; response_revision:658; }","duration":"178.708646ms","start":"2025-11-19T02:44:34.649870Z","end":"2025-11-19T02:44:34.828578Z","steps":["trace[128514129] 'agreement among raft nodes before linearized reading'  (duration: 142.083486ms)","trace[128514129] 'range keys from in-memory index tree'  (duration: 36.371941ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:44:34.828704Z","caller":"traceutil/trace.go:172","msg":"trace[2081923050] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"179.294304ms","start":"2025-11-19T02:44:34.649398Z","end":"2025-11-19T02:44:34.828692Z","steps":["trace[2081923050] 'process raft request'  (duration: 142.529146ms)","trace[2081923050] 'compare'  (duration: 36.469881ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:44:50 up  1:27,  0 user,  load average: 3.21, 3.28, 2.28
	Linux embed-certs-811173 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aa517f916a402e98a0e426261b2e8d3cf00858150d67405fda67eaf37bb6e901] <==
	I1119 02:43:59.435969       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:43:59.436218       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:43:59.436377       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:43:59.436394       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:43:59.436419       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:43:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:43:59.690490       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:43:59.690551       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:43:59.690565       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:43:59.690690       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:44:00.090348       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:44:00.090389       1 metrics.go:72] Registering metrics
	I1119 02:44:00.092489       1 controller.go:711] "Syncing nftables rules"
	I1119 02:44:09.639643       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:09.639723       1 main.go:301] handling current node
	I1119 02:44:19.643506       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:19.643568       1 main.go:301] handling current node
	I1119 02:44:29.639513       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:29.639586       1 main.go:301] handling current node
	I1119 02:44:39.641552       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:39.641593       1 main.go:301] handling current node
	I1119 02:44:49.648523       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:49.648556       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b9603bf135a48a7fd7f1a7df00bc5ac2ca325854631a2e9109eebbe9c579c3fc] <==
	I1119 02:43:58.107248       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 02:43:58.107282       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:43:58.107294       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:43:58.107613       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:43:58.107307       1 aggregator.go:171] initial CRD sync complete...
	I1119 02:43:58.107702       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:43:58.107709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:43:58.107716       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:43:58.107366       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:43:58.110102       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 02:43:58.110136       1 policy_source.go:240] refreshing policies
	I1119 02:43:58.112729       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:43:58.147653       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:43:58.394849       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:43:58.421144       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:43:58.438167       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:43:58.444958       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:43:58.450940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:43:58.484586       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.213.84"}
	I1119 02:43:58.494475       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.251.255"}
	I1119 02:43:59.010415       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:44:01.784685       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:44:01.838848       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:44:01.889499       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [05974f8fe2ed9b3af8b149d271de0fd120542bca0e181f00cc290f0684748003] <==
	I1119 02:44:01.348911       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-811173"
	I1119 02:44:01.348954       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 02:44:01.350777       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:44:01.352037       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:44:01.356501       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:44:01.378917       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:44:01.378976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:44:01.378991       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:44:01.379001       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:44:01.379001       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:44:01.379004       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:44:01.379025       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:44:01.379245       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:44:01.379249       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:44:01.379364       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:44:01.380521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:44:01.382383       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:44:01.382491       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:44:01.385682       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:44:01.391200       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 02:44:01.392463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:44:01.394604       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:44:01.398872       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 02:44:01.401369       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:44:01.442545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b23c1eb2d226e72679956724897a9a7086eebbdc3ef47d3921d153ed58b2e05d] <==
	I1119 02:43:59.299508       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:43:59.364489       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:43:59.465279       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:43:59.465339       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 02:43:59.465417       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:43:59.483490       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:43:59.483555       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:43:59.488637       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:43:59.489025       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:43:59.489054       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:59.490459       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:43:59.490494       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:43:59.490496       1 config.go:200] "Starting service config controller"
	I1119 02:43:59.490518       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:43:59.490510       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:43:59.490548       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:43:59.490554       1 config.go:309] "Starting node config controller"
	I1119 02:43:59.490570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:43:59.490586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:43:59.590688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:43:59.590707       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:43:59.590732       1 shared_informer.go:356] "Caches are synced" controller="service config"
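	The E-line above warns that nodePortAddresses is unset and suggests '--nodeport-addresses primary'. The equivalent in a KubeProxyConfiguration file would be a single field; a configuration fragment sketch, using the 'primary' special value the warning itself recommends:
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  nodePortAddresses: ["primary"]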
	
	
	==> kube-scheduler [e0994ea94767873e5f7aa16af71ef5155fc15391a563da35948cadb1520f80bd] <==
	I1119 02:43:56.824322       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:43:58.021674       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:43:58.021707       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:43:58.021723       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:43:58.021732       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:43:58.070148       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:43:58.070243       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:58.073584       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:43:58.073768       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:43:58.073709       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:43:58.073885       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:43:58.174679       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:44:02 embed-certs-811173 kubelet[728]: I1119 02:44:02.189588     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98385ede-e5ef-4e37-b563-0e45839e67f5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-cvcns\" (UID: \"98385ede-e5ef-4e37-b563-0e45839e67f5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns"
	Nov 19 02:44:02 embed-certs-811173 kubelet[728]: I1119 02:44:02.189612     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbsq4\" (UniqueName: \"kubernetes.io/projected/98385ede-e5ef-4e37-b563-0e45839e67f5-kube-api-access-gbsq4\") pod \"dashboard-metrics-scraper-6ffb444bf9-cvcns\" (UID: \"98385ede-e5ef-4e37-b563-0e45839e67f5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns"
	Nov 19 02:44:04 embed-certs-811173 kubelet[728]: I1119 02:44:04.459919     728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 02:44:08 embed-certs-811173 kubelet[728]: I1119 02:44:08.980478     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22wsb" podStartSLOduration=0.636456724 podStartE2EDuration="6.980454113s" podCreationTimestamp="2025-11-19 02:44:02 +0000 UTC" firstStartedPulling="2025-11-19 02:44:02.418417638 +0000 UTC m=+6.629049102" lastFinishedPulling="2025-11-19 02:44:08.762415036 +0000 UTC m=+12.973046491" observedRunningTime="2025-11-19 02:44:08.979070336 +0000 UTC m=+13.189701806" watchObservedRunningTime="2025-11-19 02:44:08.980454113 +0000 UTC m=+13.191085585"
	Nov 19 02:44:11 embed-certs-811173 kubelet[728]: I1119 02:44:11.974097     728 scope.go:117] "RemoveContainer" containerID="c7a10a12fb73d562dedac0ef67ec7f7db10ae17c3abf93841264c0f9068cdd13"
	Nov 19 02:44:12 embed-certs-811173 kubelet[728]: I1119 02:44:12.979079     728 scope.go:117] "RemoveContainer" containerID="c7a10a12fb73d562dedac0ef67ec7f7db10ae17c3abf93841264c0f9068cdd13"
	Nov 19 02:44:12 embed-certs-811173 kubelet[728]: I1119 02:44:12.979246     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:12 embed-certs-811173 kubelet[728]: E1119 02:44:12.979467     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:13 embed-certs-811173 kubelet[728]: I1119 02:44:13.985694     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:13 embed-certs-811173 kubelet[728]: E1119 02:44:13.985879     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:16 embed-certs-811173 kubelet[728]: I1119 02:44:16.086482     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:16 embed-certs-811173 kubelet[728]: E1119 02:44:16.086744     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:27 embed-certs-811173 kubelet[728]: I1119 02:44:27.895787     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:28 embed-certs-811173 kubelet[728]: I1119 02:44:28.019607     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:28 embed-certs-811173 kubelet[728]: I1119 02:44:28.019850     728 scope.go:117] "RemoveContainer" containerID="684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	Nov 19 02:44:28 embed-certs-811173 kubelet[728]: E1119 02:44:28.020042     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:30 embed-certs-811173 kubelet[728]: I1119 02:44:30.028179     728 scope.go:117] "RemoveContainer" containerID="c54f63e61cd62e1b142da359df33d91caf60f07fa0b8e3232b02d81672c144e4"
	Nov 19 02:44:36 embed-certs-811173 kubelet[728]: I1119 02:44:36.086509     728 scope.go:117] "RemoveContainer" containerID="684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	Nov 19 02:44:36 embed-certs-811173 kubelet[728]: E1119 02:44:36.086693     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:46 embed-certs-811173 kubelet[728]: I1119 02:44:46.895778     728 scope.go:117] "RemoveContainer" containerID="684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	Nov 19 02:44:46 embed-certs-811173 kubelet[728]: E1119 02:44:46.895974     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:48 embed-certs-811173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:44:48 embed-certs-811173 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:44:48 embed-certs-811173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:44:48 embed-certs-811173 systemd[1]: kubelet.service: Consumed 1.629s CPU time.
	
	
	==> kubernetes-dashboard [cca02da00a4676afe504adf5be3a8411759a7aeae1cf8b33d87c2969c8b35ee0] <==
	2025/11/19 02:44:08 Using namespace: kubernetes-dashboard
	2025/11/19 02:44:08 Using in-cluster config to connect to apiserver
	2025/11/19 02:44:08 Using secret token for csrf signing
	2025/11/19 02:44:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:44:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:44:08 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 02:44:08 Generating JWE encryption key
	2025/11/19 02:44:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:44:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:44:09 Initializing JWE encryption key from synchronized object
	2025/11/19 02:44:09 Creating in-cluster Sidecar client
	2025/11/19 02:44:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:09 Serving insecurely on HTTP port: 9090
	2025/11/19 02:44:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:08 Starting overwatch
	
	
	==> storage-provisioner [93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222] <==
	I1119 02:44:30.082948       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:44:30.092147       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:44:30.092196       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:44:30.094327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:33.548566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:37.809569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:41.407989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:44.461315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:47.483832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:47.488176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:47.488341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:44:47.488484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6ae127c-c859-4ddd-8bc9-6532cea887ea", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-811173_c9984f80-c805-4726-b8ca-3bac7548e455 became leader
	I1119 02:44:47.488539       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-811173_c9984f80-c805-4726-b8ca-3bac7548e455!
	W1119 02:44:47.490475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:47.493942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:47.589693       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-811173_c9984f80-c805-4726-b8ca-3bac7548e455!
	W1119 02:44:49.497727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:49.502085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c54f63e61cd62e1b142da359df33d91caf60f07fa0b8e3232b02d81672c144e4] <==
	I1119 02:43:59.268652       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:44:29.271824       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
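Note on the kube-scheduler warnings in the dump above: the requestheader_controller message carries its own suggested remediation. A minimal sketch of that rolebinding follows, assuming the cluster from this report; the binding name is illustrative, and since the scheduler authenticates as the user system:kube-scheduler rather than a service account, --user stands in for the --serviceaccount form from the log's template:

	# Illustrative sketch: grant the scheduler read access to the
	# extension-apiserver-authentication configmap, per the log's own hint.
	kubectl create rolebinding extension-apiserver-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler \
	  -n kube-system

In this run the scheduler continued without the lookup (authentication.go:398), so the warning is cosmetic for the pause test itself.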
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-811173 -n embed-certs-811173
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-811173 -n embed-certs-811173: exit status 2 (331.805307ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
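The kubelet entries in the dump above show dashboard-metrics-scraper cycling through CrashLoopBackOff, with the restart back-off doubling from 10s to 20s. As a hedged sketch, the current back-off state of that pod could be read with a jsonpath query against the same context (pod name taken from the log):

	# Sketch: restart count and waiting reason for the crash-looping container.
	kubectl --context embed-certs-811173 -n kubernetes-dashboard get pod \
	  dashboard-metrics-scraper-6ffb444bf9-cvcns \
	  -o jsonpath='{.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].state.waiting.reason}'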
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-811173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
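One further note on the storage-provisioner output above: it acquires its leader lock through a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which is exactly what triggers the repeated "v1 Endpoints is deprecated" warnings. A hedged sketch for inspecting both the legacy lock object and the discovery.k8s.io replacement the warning points to:

	# The Endpoints object used as the leader-election lock (deprecated in v1.33+):
	kubectl --context embed-certs-811173 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The EndpointSlice API the deprecation warning recommends:
	kubectl --context embed-certs-811173 -n kube-system get endpointslices.discovery.k8s.io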
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-811173
helpers_test.go:243: (dbg) docker inspect embed-certs-811173:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668",
	        "Created": "2025-11-19T02:42:39.275670124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:43:49.479375137Z",
	            "FinishedAt": "2025-11-19T02:43:48.652309218Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/hostname",
	        "HostsPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/hosts",
	        "LogPath": "/var/lib/docker/containers/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668/f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668-json.log",
	        "Name": "/embed-certs-811173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-811173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-811173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f59ac2b4a856949cb2e1ed6807a501c825bed5125b05f7ff483858655e957668",
	                "LowerDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07d90f0d6a437038a8f0a347be8e1b31b31817fee59231702439e2ea962044d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-811173",
	                "Source": "/var/lib/docker/volumes/embed-certs-811173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-811173",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-811173",
	                "name.minikube.sigs.k8s.io": "embed-certs-811173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c973d9ddf5ca4582c6d9a3e3426352bb27d0718cc9c17a004d4d318c5d5344b0",
	            "SandboxKey": "/var/run/docker/netns/c973d9ddf5ca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-811173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3129c4b605594e1d463b2d85e5ed79f025bb6cff93cf80cdce990db8936b5a9c",
	                    "EndpointID": "9a5b314c12870073b566c60829f8bb3d30b754e8bab543728cfe758761427d3f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7a:3f:06:cd:e5:7d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-811173",
	                        "f59ac2b4a856"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
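Only a handful of fields in the inspect dump above matter for the pause post-mortem: State.Status, State.Paused, and the host port published for 8443/tcp. As a sketch, the same fields can be pulled directly with docker inspect's Go-template flag instead of reading the full JSON:

	# Sketch: extract the pause-relevant fields from the container state.
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' embed-certs-811173
	# Host port published for the apiserver (8443/tcp inside the container):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-811173

Against the snapshot above these report status=running paused=false and 33116, i.e. the kic container itself is not docker-paused even though the pause subcommand failed.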
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-811173 -n embed-certs-811173
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-811173 -n embed-certs-811173: exit status 2 (326.721645ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
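The helpers above probe one status field per invocation ({{.APIServer}}, then {{.Host}}). A sketch of folding them into a single call with the same template mechanism; {{.Kubelet}} is assumed here from minikube's default status template rather than shown in this report:

	# Sketch: query several status fields in one invocation.
	out/minikube-linux-amd64 status -p embed-certs-811173 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'

As with the single-field checks, a non-zero exit encodes component state rather than a hard failure, which is why the helper annotates exit status 2 with "(may be ok)".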
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-811173 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-811173 logs -n 25: (1.052611301s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-682232                                                                                                                                                                                                               │ disable-driver-mounts-682232 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:42 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:42 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-987573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p old-k8s-version-987573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-167150 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:44:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:44:29.891671  330644 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:29.891773  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.891782  330644 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:29.891786  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.892013  330644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:29.892489  330644 out.go:368] Setting JSON to false
	I1119 02:44:29.893932  330644 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5217,"bootTime":1763515053,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:44:29.894009  330644 start.go:143] virtualization: kvm guest
	I1119 02:44:29.896106  330644 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:44:29.897349  330644 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:44:29.897381  330644 notify.go:221] Checking for updates...
	I1119 02:44:29.899649  330644 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:44:29.900639  330644 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:29.901703  330644 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:44:29.902810  330644 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:44:29.903920  330644 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:44:29.905455  330644 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905620  330644 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905735  330644 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905864  330644 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:44:29.930269  330644 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:44:29.930390  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:29.990852  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:29.980528215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:29.990974  330644 docker.go:319] overlay module found
	I1119 02:44:29.992680  330644 out.go:179] * Using the docker driver based on user configuration
	I1119 02:44:29.993882  330644 start.go:309] selected driver: docker
	I1119 02:44:29.993897  330644 start.go:930] validating driver "docker" against <nil>
	I1119 02:44:29.993908  330644 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:44:29.994485  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:30.055174  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:30.045301349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:30.055367  330644 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 02:44:30.055398  330644 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 02:44:30.055690  330644 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:44:30.057878  330644 out.go:179] * Using Docker driver with root privileges
	I1119 02:44:30.059068  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:30.059130  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:30.059141  330644 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:44:30.059196  330644 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:30.060543  330644 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:44:30.061681  330644 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:44:30.062975  330644 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:44:30.064114  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.064143  330644 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:44:30.064167  330644 cache.go:65] Caching tarball of preloaded images
	I1119 02:44:30.064199  330644 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:44:30.064251  330644 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:44:30.064266  330644 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:44:30.064364  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:30.064387  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json: {Name:mk5f6a602a7486c803f28ee981bc4fb72f30089f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:30.086997  330644 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:44:30.087020  330644 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:44:30.087033  330644 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:44:30.087059  330644 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:44:30.087146  330644 start.go:364] duration metric: took 69.531µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:44:30.087169  330644 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:30.087250  330644 start.go:125] createHost starting for "" (driver="docker")
	W1119 02:44:25.920223  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:28.420250  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:30.420774  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:29.634283  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:31.634456  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:34.134853  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:29.824614  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:31.825210  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:33.861933  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:30.090250  330644 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:44:30.090540  330644 start.go:159] libmachine.API.Create for "newest-cni-956139" (driver="docker")
	I1119 02:44:30.090580  330644 client.go:173] LocalClient.Create starting
	I1119 02:44:30.090711  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 02:44:30.090762  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090788  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.090868  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 02:44:30.090897  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090911  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.091311  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:44:30.108825  330644 cli_runner.go:211] docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:44:30.108874  330644 network_create.go:284] running [docker network inspect newest-cni-956139] to gather additional debugging logs...
	I1119 02:44:30.108888  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139
	W1119 02:44:30.125848  330644 cli_runner.go:211] docker network inspect newest-cni-956139 returned with exit code 1
	I1119 02:44:30.125873  330644 network_create.go:287] error running [docker network inspect newest-cni-956139]: docker network inspect newest-cni-956139: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-956139 not found
	I1119 02:44:30.125887  330644 network_create.go:289] output of [docker network inspect newest-cni-956139]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-956139 not found
	
	** /stderr **
	I1119 02:44:30.126008  330644 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:30.145372  330644 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
	I1119 02:44:30.146006  330644 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-70e7d73f86d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:64:3f:46:8e:7a} reservation:<nil>}
	I1119 02:44:30.146778  330644 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ef477b5a23 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:eb:22:b3:62:92} reservation:<nil>}
	I1119 02:44:30.147612  330644 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee2320}
	I1119 02:44:30.147633  330644 network_create.go:124] attempt to create docker network newest-cni-956139 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 02:44:30.147689  330644 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-956139 newest-cni-956139
	I1119 02:44:30.194747  330644 network_create.go:108] docker network newest-cni-956139 192.168.76.0/24 created
	I1119 02:44:30.194772  330644 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-956139" container
	I1119 02:44:30.194838  330644 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:44:30.212175  330644 cli_runner.go:164] Run: docker volume create newest-cni-956139 --label name.minikube.sigs.k8s.io=newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:44:30.229588  330644 oci.go:103] Successfully created a docker volume newest-cni-956139
	I1119 02:44:30.229664  330644 cli_runner.go:164] Run: docker run --rm --name newest-cni-956139-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --entrypoint /usr/bin/test -v newest-cni-956139:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:44:30.612069  330644 oci.go:107] Successfully prepared a docker volume newest-cni-956139
	I1119 02:44:30.612124  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.612132  330644 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:44:30.612187  330644 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
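The command above runs the kicbase image as a one-shot tar container so the preloaded images are unpacked straight into the named volume that will later back /var in the node container. The same pattern in isolation (a sketch; PRELOAD and KICBASE stand in for the tarball path and image reference shown in the log):

    # unpack an lz4-compressed preload tarball into a docker volume
    PRELOAD=/path/shown/in/log/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" -v newest-cni-956139:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir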
	W1119 02:44:32.919409  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:34.920166  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:34.646141  320707 pod_ready.go:94] pod "coredns-66bc5c9577-6zqr2" is "Ready"
	I1119 02:44:34.646170  320707 pod_ready.go:86] duration metric: took 35.016957338s for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.648819  320707 pod_ready.go:83] waiting for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.831828  320707 pod_ready.go:94] pod "etcd-embed-certs-811173" is "Ready"
	I1119 02:44:34.831852  320707 pod_ready.go:86] duration metric: took 183.006168ms for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.834239  320707 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.837643  320707 pod_ready.go:94] pod "kube-apiserver-embed-certs-811173" is "Ready"
	I1119 02:44:34.837663  320707 pod_ready.go:86] duration metric: took 3.400351ms for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.839329  320707 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.842652  320707 pod_ready.go:94] pod "kube-controller-manager-embed-certs-811173" is "Ready"
	I1119 02:44:34.842670  320707 pod_ready.go:86] duration metric: took 3.319388ms for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.032627  320707 pod_ready.go:83] waiting for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.432934  320707 pod_ready.go:94] pod "kube-proxy-s5bzz" is "Ready"
	I1119 02:44:35.432959  320707 pod_ready.go:86] duration metric: took 400.306652ms for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.633961  320707 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032469  320707 pod_ready.go:94] pod "kube-scheduler-embed-certs-811173" is "Ready"
	I1119 02:44:36.032499  320707 pod_ready.go:86] duration metric: took 398.480495ms for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032511  320707 pod_ready.go:40] duration metric: took 36.406499301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:36.080404  320707 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:36.082160  320707 out.go:179] * Done! kubectl is now configured to use "embed-certs-811173" cluster and "default" namespace by default
	I1119 02:44:34.960079  330644 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.347852696s)
	I1119 02:44:34.960108  330644 kic.go:203] duration metric: took 4.347972861s to extract preloaded images to volume ...
	W1119 02:44:34.960206  330644 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:44:34.960254  330644 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:44:34.960300  330644 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:44:35.014083  330644 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-956139 --name newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-956139 --network newest-cni-956139 --ip 192.168.76.2 --volume newest-cni-956139:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
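Every published port in the run command above uses the 127.0.0.1:: form, so Docker assigns an ephemeral host port; the repeated `docker container inspect -f ... HostPort` calls that follow exist to resolve those assignments (SSH on 22/tcp comes back as 33128 here). The lookup by hand:

    # resolve the ephemeral host port Docker mapped to the container's SSH port
    docker container inspect newest-cni-956139 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'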
	I1119 02:44:35.325493  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Running}}
	I1119 02:44:35.343669  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.361759  330644 cli_runner.go:164] Run: docker exec newest-cni-956139 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:44:35.406925  330644 oci.go:144] the created container "newest-cni-956139" has a running status.
	I1119 02:44:35.406959  330644 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa...
	I1119 02:44:35.779267  330644 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:44:35.805615  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.826512  330644 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:44:35.826530  330644 kic_runner.go:114] Args: [docker exec --privileged newest-cni-956139 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:44:35.871319  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.889991  330644 machine.go:94] provisionDockerMachine start ...
	I1119 02:44:35.890097  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:35.909789  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:35.910136  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:35.910158  330644 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:44:36.043778  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.043805  330644 ubuntu.go:182] provisioning hostname "newest-cni-956139"
	I1119 02:44:36.043885  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.065697  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.065904  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.065918  330644 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956139 && echo "newest-cni-956139" | sudo tee /etc/hostname
	I1119 02:44:36.211004  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.211088  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.229392  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.229616  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.229635  330644 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956139/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:44:36.359138  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:44:36.359177  330644 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:44:36.359210  330644 ubuntu.go:190] setting up certificates
	I1119 02:44:36.359219  330644 provision.go:84] configureAuth start
	I1119 02:44:36.359262  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:36.381048  330644 provision.go:143] copyHostCerts
	I1119 02:44:36.381118  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:44:36.381134  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:44:36.381241  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:44:36.381393  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:44:36.381407  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:44:36.381473  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:44:36.381598  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:44:36.381613  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:44:36.381659  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:44:36.381762  330644 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956139 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956139]
	I1119 02:44:36.425094  330644 provision.go:177] copyRemoteCerts
	I1119 02:44:36.425145  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:44:36.425178  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.444152  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.542494  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:44:36.560963  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:44:36.577617  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:44:36.594302  330644 provision.go:87] duration metric: took 235.073311ms to configureAuth
	I1119 02:44:36.594322  330644 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:44:36.594527  330644 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:36.594625  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.612019  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.612218  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.612232  330644 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:44:36.879790  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:44:36.879819  330644 machine.go:97] duration metric: took 989.804229ms to provisionDockerMachine
	I1119 02:44:36.879830  330644 client.go:176] duration metric: took 6.789240603s to LocalClient.Create
	I1119 02:44:36.879851  330644 start.go:167] duration metric: took 6.789312626s to libmachine.API.Create "newest-cni-956139"
	I1119 02:44:36.879860  330644 start.go:293] postStartSetup for "newest-cni-956139" (driver="docker")
	I1119 02:44:36.879872  330644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:44:36.879933  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:44:36.879968  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.898156  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.993744  330644 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:44:36.997203  330644 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:44:36.997235  330644 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:44:36.997254  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:44:36.997312  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:44:36.997404  330644 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:44:36.997536  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:44:37.005305  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:37.024142  330644 start.go:296] duration metric: took 144.272497ms for postStartSetup
	I1119 02:44:37.024490  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.042142  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:37.042364  330644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:44:37.042421  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.060279  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.151155  330644 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:44:37.155487  330644 start.go:128] duration metric: took 7.068223226s to createHost
	I1119 02:44:37.155509  330644 start.go:83] releasing machines lock for "newest-cni-956139", held for 7.068353821s
	I1119 02:44:37.155567  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.172738  330644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:44:37.172750  330644 ssh_runner.go:195] Run: cat /version.json
	I1119 02:44:37.172802  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.172817  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.191403  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.191761  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.349781  330644 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:37.356447  330644 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:44:37.390971  330644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:44:37.395386  330644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:44:37.395452  330644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:44:37.420966  330644 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:44:37.421000  330644 start.go:496] detecting cgroup driver to use...
	I1119 02:44:37.421031  330644 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:44:37.421116  330644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:44:37.437016  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:44:37.448636  330644 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:44:37.448680  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:44:37.464103  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:44:37.483229  330644 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:37.569719  330644 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:37.663891  330644 docker.go:234] disabling docker service ...
	I1119 02:44:37.663946  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:37.684672  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:37.699707  330644 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:37.783938  330644 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:37.866466  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:44:37.878906  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:37.893148  330644 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:37.893200  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.903765  330644 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:37.903825  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.912380  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.922240  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.930944  330644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:37.938625  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.947066  330644 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.960171  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.968261  330644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:37.975267  330644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:37.982398  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.060067  330644 ssh_runner.go:195] Run: sudo systemctl restart crio
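Taken together, the sed edits above amount to a small override of CRI-O's drop-in configuration, applied before the daemon-reload and restart. Reconstructed from those edits (not captured from the host; the section headers are assumed from CRI-O's standard layout), /etc/crio/crio.conf.d/02-crio.conf should now carry roughly:

    # reconstructed effect of the edits above (TOML fragment; headers assumed)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]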
	I1119 02:44:38.192960  330644 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:38.193022  330644 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:38.196763  330644 start.go:564] Will wait 60s for crictl version
	I1119 02:44:38.196824  330644 ssh_runner.go:195] Run: which crictl
	I1119 02:44:38.200161  330644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:38.225001  330644 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:44:38.225065  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.251944  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.282138  330644 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:44:38.283487  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:38.300312  330644 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:38.304280  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
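The /etc/hosts rewrite above filters out any stale entry, appends the fresh one to a temp file, and only then copies it into place with sudo; redirecting straight into /etc/hosts would fail because the redirection runs in the unprivileged shell before sudo does. The idiom in isolation:

    # atomically replace any existing host.minikube.internal entry
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts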
	I1119 02:44:38.315573  330644 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1119 02:44:36.325065  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:38.824893  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:38.316650  330644 kubeadm.go:884] updating cluster {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:38.316772  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:38.316823  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.347925  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.347943  330644 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:44:38.348024  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.371370  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.371386  330644 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:38.371393  330644 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:44:38.371489  330644 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:44:38.371568  330644 ssh_runner.go:195] Run: crio config
	I1119 02:44:38.414403  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:38.414425  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:38.414455  330644 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:44:38.414480  330644 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956139 NodeName:newest-cni-956139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:38.414596  330644 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:44:38.414650  330644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:38.422980  330644 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:38.423037  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:38.430764  330644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 02:44:38.442899  330644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:38.457503  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
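The 2211-byte payload copied above is the multi-document kubeadm manifest printed earlier, staged as kubeadm.yaml.new before being promoted to kubeadm.yaml. For a manual sanity check of such a file, newer kubeadm releases can validate it without initializing anything (an aside, assuming kubeadm v1.26 or later, where the subcommand was added):

    # dry-check the staged kubeadm configuration inside the node
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new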
	I1119 02:44:38.470194  330644 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:38.473583  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:38.482869  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.562300  330644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:38.585622  330644 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139 for IP: 192.168.76.2
	I1119 02:44:38.585639  330644 certs.go:195] generating shared ca certs ...
	I1119 02:44:38.585658  330644 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.585812  330644 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:38.585880  330644 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:38.585900  330644 certs.go:257] generating profile certs ...
	I1119 02:44:38.585973  330644 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key
	I1119 02:44:38.585994  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt with IP's: []
	I1119 02:44:38.886736  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt ...
	I1119 02:44:38.886761  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt: {Name:mkb981b48727217d5d544f8c1ece639a24196b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.886914  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key ...
	I1119 02:44:38.886927  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key: {Name:mkf09d335927b94ecd83db709f24055ce131f9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.887002  330644 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d
	I1119 02:44:38.887016  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 02:44:39.078031  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d ...
	I1119 02:44:39.078059  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d: {Name:mkcff50d0bd0e5de553650f0790abc33df1f3d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078203  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d ...
	I1119 02:44:39.078217  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d: {Name:mk332d91d4c4926805e4ae3abcbd91571604bef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078314  330644 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt
	I1119 02:44:39.078410  330644 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key
	I1119 02:44:39.078500  330644 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key
	I1119 02:44:39.078517  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt with IP's: []
	I1119 02:44:39.492473  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt ...
	I1119 02:44:39.492501  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt: {Name:mk2d2a0752005ddbf3ff7866b2d888f6c88921c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.492685  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key ...
	I1119 02:44:39.492708  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key: {Name:mk0676b22a9381558c3b1f8b4d9f9ded76cf6a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
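All three profile certs generated above (client, apiserver, aggregator proxy-client) are signed by the shared minikubeCA rather than self-signed. For reference, an equivalent SAN-bearing serving cert can be cut with plain openssl; a sketch only, assuming ca.crt/ca.key are the minikubeCA pair and reusing the apiserver SAN IPs from the log:

    # sketch: issue a serving cert for the apiserver SANs, signed by the shared CA
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')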
	I1119 02:44:39.492943  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:39.492986  330644 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:39.493002  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:39.493035  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:39.493063  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:39.493096  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:39.493152  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:39.493921  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:39.511675  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:39.528321  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:39.545416  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:39.561752  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:44:39.578259  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:44:39.594332  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:39.610201  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:44:39.626532  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:39.646920  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:39.663725  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:39.680824  330644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:39.692613  330644 ssh_runner.go:195] Run: openssl version
	I1119 02:44:39.699229  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:39.708084  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711716  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711771  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.746645  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:44:39.754713  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:39.762929  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766299  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766335  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.800570  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:39.808541  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:39.816270  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819952  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819989  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.854738  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
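The hash-then-symlink sequence repeated above follows OpenSSL's CA-directory convention: verifiers locate an issuer by the hash of its subject name and expect a file or symlink named <hash>.0 in /etc/ssl/certs, which is why each PEM gets a companion link (b5213941.0, 3ec20f2e.0, 51391683.0 here). By hand:

    # compute the subject hash and create the lookup symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"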
	I1119 02:44:39.863275  330644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:39.866811  330644 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:44:39.866866  330644 kubeadm.go:401] StartCluster: {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:39.866959  330644 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:39.867032  330644 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:39.893234  330644 cri.go:89] found id: ""
	I1119 02:44:39.893298  330644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:39.901084  330644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:44:39.908779  330644 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:44:39.908820  330644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:44:39.915918  330644 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:44:39.915956  330644 kubeadm.go:158] found existing configuration files:
	
	I1119 02:44:39.916000  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:44:39.924150  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:44:39.924192  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:44:39.931134  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:44:39.938135  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:44:39.938182  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:44:39.945082  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.952377  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:44:39.952425  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.959861  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:44:39.966757  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:44:39.966801  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:44:39.973926  330644 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:44:40.012094  330644 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:44:40.012170  330644 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:44:40.051599  330644 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:44:40.051753  330644 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:44:40.051826  330644 kubeadm.go:319] OS: Linux
	I1119 02:44:40.051888  330644 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:44:40.051939  330644 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:44:40.052007  330644 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:44:40.052083  330644 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:44:40.052163  330644 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:44:40.052233  330644 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:44:40.052284  330644 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:44:40.052344  330644 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:44:40.110629  330644 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:44:40.110786  330644 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:44:40.110919  330644 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
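As the preflight hint above says, the image pull can be done ahead of time; with the staged config it would look like this (the --config flag keeps the pull consistent with the pinned Kubernetes version):

    # pre-pull the control-plane images before running kubeadm init
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml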
	I1119 02:44:40.118761  330644 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1119 02:44:37.420903  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:39.920505  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:40.823992  321785 pod_ready.go:94] pod "coredns-66bc5c9577-bht2q" is "Ready"
	I1119 02:44:40.824024  321785 pod_ready.go:86] duration metric: took 34.00468535s for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.826065  321785 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.829510  321785 pod_ready.go:94] pod "etcd-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.829533  321785 pod_ready.go:86] duration metric: took 3.445845ms for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.831135  321785 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.834490  321785 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.834508  321785 pod_ready.go:86] duration metric: took 3.353905ms for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.836222  321785 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.022776  321785 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:41.022802  321785 pod_ready.go:86] duration metric: took 186.560827ms for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.222650  321785 pod_ready.go:83] waiting for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.623243  321785 pod_ready.go:94] pod "kube-proxy-8gl4n" is "Ready"
	I1119 02:44:41.623276  321785 pod_ready.go:86] duration metric: took 400.60046ms for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.823313  321785 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222639  321785 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:42.222665  321785 pod_ready.go:86] duration metric: took 399.326737ms for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222675  321785 pod_ready.go:40] duration metric: took 35.410146964s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:42.265461  321785 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:42.267962  321785 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-167150" cluster and "default" namespace by default
	I1119 02:44:40.120572  330644 out.go:252]   - Generating certificates and keys ...
	I1119 02:44:40.120676  330644 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:44:40.120767  330644 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:44:40.285783  330644 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:44:40.596128  330644 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:44:40.775594  330644 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:44:40.856728  330644 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:44:41.447992  330644 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:44:41.448141  330644 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.120936  330644 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:44:42.121139  330644 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.400506  330644 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:44:42.544344  330644 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:44:42.820587  330644 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:44:42.820689  330644 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:44:42.995265  330644 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:44:43.162291  330644 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:44:43.196763  330644 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:44:43.556128  330644 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:44:43.787728  330644 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:44:43.788303  330644 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:44:43.792218  330644 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:44:43.793609  330644 out.go:252]   - Booting up control plane ...
	I1119 02:44:43.793714  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:44:43.793818  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:44:43.794447  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:44:43.811365  330644 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:44:43.811606  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:44:43.817701  330644 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:44:43.818010  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:44:43.818083  330644 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:44:43.912675  330644 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:44:43.912849  330644 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1119 02:44:42.419894  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:44.921381  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:46.419827  322722 pod_ready.go:94] pod "coredns-66bc5c9577-44bdr" is "Ready"
	I1119 02:44:46.419857  322722 pod_ready.go:86] duration metric: took 38.00494675s for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.422128  322722 pod_ready.go:83] waiting for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.425877  322722 pod_ready.go:94] pod "etcd-no-preload-837474" is "Ready"
	I1119 02:44:46.425901  322722 pod_ready.go:86] duration metric: took 3.744715ms for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.427596  322722 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.430915  322722 pod_ready.go:94] pod "kube-apiserver-no-preload-837474" is "Ready"
	I1119 02:44:46.430936  322722 pod_ready.go:86] duration metric: took 3.318971ms for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.432827  322722 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.619267  322722 pod_ready.go:94] pod "kube-controller-manager-no-preload-837474" is "Ready"
	I1119 02:44:46.619298  322722 pod_ready.go:86] duration metric: took 186.448054ms for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.819349  322722 pod_ready.go:83] waiting for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.219089  322722 pod_ready.go:94] pod "kube-proxy-hmxzk" is "Ready"
	I1119 02:44:47.219115  322722 pod_ready.go:86] duration metric: took 399.745795ms for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.418899  322722 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819293  322722 pod_ready.go:94] pod "kube-scheduler-no-preload-837474" is "Ready"
	I1119 02:44:47.819318  322722 pod_ready.go:86] duration metric: took 400.396392ms for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819332  322722 pod_ready.go:40] duration metric: took 39.409998426s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:47.882918  322722 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:47.884667  322722 out.go:179] * Done! kubectl is now configured to use "no-preload-837474" cluster and "default" namespace by default
	I1119 02:44:44.914267  330644 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001584412s
	I1119 02:44:44.919834  330644 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:44:44.919954  330644 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:44:44.920098  330644 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:44:44.920202  330644 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:44:46.082445  330644 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.162579737s
	I1119 02:44:46.762642  330644 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.842786839s
	I1119 02:44:48.421451  330644 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501654588s
	I1119 02:44:48.432989  330644 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:44:48.442965  330644 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:44:48.450246  330644 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:44:48.450564  330644 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-956139 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:44:48.457630  330644 kubeadm.go:319] [bootstrap-token] Using token: bpq1za.q7wy15mme3dprzfy
	I1119 02:44:48.458785  330644 out.go:252]   - Configuring RBAC rules ...
	I1119 02:44:48.458936  330644 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:44:48.461935  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:44:48.466914  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:44:48.469590  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:44:48.472718  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:44:48.475031  330644 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:44:48.827275  330644 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:44:49.241863  330644 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:44:49.827545  330644 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:44:49.828386  330644 kubeadm.go:319] 
	I1119 02:44:49.828472  330644 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:44:49.828485  330644 kubeadm.go:319] 
	I1119 02:44:49.828608  330644 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:44:49.828625  330644 kubeadm.go:319] 
	I1119 02:44:49.828650  330644 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:44:49.828731  330644 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:44:49.828818  330644 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:44:49.828832  330644 kubeadm.go:319] 
	I1119 02:44:49.828906  330644 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:44:49.828916  330644 kubeadm.go:319] 
	I1119 02:44:49.828980  330644 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:44:49.828990  330644 kubeadm.go:319] 
	I1119 02:44:49.829055  330644 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:44:49.829166  330644 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:44:49.829226  330644 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:44:49.829233  330644 kubeadm.go:319] 
	I1119 02:44:49.829341  330644 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:44:49.829450  330644 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:44:49.829464  330644 kubeadm.go:319] 
	I1119 02:44:49.829567  330644 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.829694  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:44:49.829727  330644 kubeadm.go:319] 	--control-plane 
	I1119 02:44:49.829737  330644 kubeadm.go:319] 
	I1119 02:44:49.829830  330644 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:44:49.829840  330644 kubeadm.go:319] 
	I1119 02:44:49.829940  330644 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.830063  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:44:49.832633  330644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:44:49.832729  330644 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:44:49.832752  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:49.832761  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:49.834994  330644 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:44:49.836244  330644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:44:49.840560  330644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:44:49.840576  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:44:49.852577  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	
	
	==> CRI-O <==
	Nov 19 02:44:12 embed-certs-811173 crio[565]: time="2025-11-19T02:44:12.031646273Z" level=info msg="Started container" PID=1744 containerID=efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper id=209763d3-2bb2-4953-ba3a-d631212257d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b375ed06ba298eb5b9adf64ed48dfd7a8d0e3b7fe91644d14ba680c1b574f9dc
	Nov 19 02:44:12 embed-certs-811173 crio[565]: time="2025-11-19T02:44:12.98129437Z" level=info msg="Removing container: c7a10a12fb73d562dedac0ef67ec7f7db10ae17c3abf93841264c0f9068cdd13" id=58223b52-559d-48c7-bbb3-da85eb853c4f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:13 embed-certs-811173 crio[565]: time="2025-11-19T02:44:13.02261412Z" level=info msg="Removed container c7a10a12fb73d562dedac0ef67ec7f7db10ae17c3abf93841264c0f9068cdd13: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper" id=58223b52-559d-48c7-bbb3-da85eb853c4f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.896251359Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fe4c9d93-25ea-4773-9baf-efdb9ff88062 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.897211184Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=94d10610-8467-45d3-81c8-fa430d2ba178 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.898222988Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper" id=5e5a56c7-f360-4832-947f-7acdd3512229 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.898368984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.903810069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.904299695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.935563695Z" level=info msg="Created container 684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper" id=5e5a56c7-f360-4832-947f-7acdd3512229 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.936124461Z" level=info msg="Starting container: 684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d" id=05a8bad4-76a9-4442-9195-5750f16a8d3b name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:27 embed-certs-811173 crio[565]: time="2025-11-19T02:44:27.937934982Z" level=info msg="Started container" PID=1754 containerID=684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper id=05a8bad4-76a9-4442-9195-5750f16a8d3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b375ed06ba298eb5b9adf64ed48dfd7a8d0e3b7fe91644d14ba680c1b574f9dc
	Nov 19 02:44:28 embed-certs-811173 crio[565]: time="2025-11-19T02:44:28.020883473Z" level=info msg="Removing container: efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5" id=2ed5b5b5-06ac-43b0-b733-e215176c6c66 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:28 embed-certs-811173 crio[565]: time="2025-11-19T02:44:28.030021725Z" level=info msg="Removed container efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns/dashboard-metrics-scraper" id=2ed5b5b5-06ac-43b0-b733-e215176c6c66 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.028770429Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=29f1893f-43ac-4d23-90d6-264e6b6ac1dd name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.029746598Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ed009695-3dbd-4e8c-8bfd-314404daca4e name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.030964295Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7a378216-5e08-4233-b6a0-1808bad3cd9f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.03135499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.037741267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.038384917Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a0eb3d3406653b9d587d91bc187656d46e59d92ff943b22bc5dbda545d2e80ad/merged/etc/passwd: no such file or directory"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.038425463Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a0eb3d3406653b9d587d91bc187656d46e59d92ff943b22bc5dbda545d2e80ad/merged/etc/group: no such file or directory"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.038885644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.067638806Z" level=info msg="Created container 93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222: kube-system/storage-provisioner/storage-provisioner" id=7a378216-5e08-4233-b6a0-1808bad3cd9f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.068248897Z" level=info msg="Starting container: 93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222" id=1327fb43-e13a-4c95-a151-1296da9bc67f name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:30 embed-certs-811173 crio[565]: time="2025-11-19T02:44:30.070080558Z" level=info msg="Started container" PID=1768 containerID=93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222 description=kube-system/storage-provisioner/storage-provisioner id=1327fb43-e13a-4c95-a151-1296da9bc67f name=/runtime.v1.RuntimeService/StartContainer sandboxID=462cb72b0d500ef960250d7128cef835c1d31c2bce8abe377be8880d71c5622f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	93226249c61ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   462cb72b0d500       storage-provisioner                          kube-system
	684aff13b7f3d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   b375ed06ba298       dashboard-metrics-scraper-6ffb444bf9-cvcns   kubernetes-dashboard
	cca02da00a467       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   65a946e37ec45       kubernetes-dashboard-855c9754f9-22wsb        kubernetes-dashboard
	3579214a76a5b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   9920d7d5dbb75       busybox                                      default
	b2cdca1146afc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   abdee3af82196       coredns-66bc5c9577-6zqr2                     kube-system
	c54f63e61cd62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   462cb72b0d500       storage-provisioner                          kube-system
	aa517f916a402       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   6a7d11717fc7d       kindnet-b2w9g                                kube-system
	b23c1eb2d226e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   b4275d476f2c6       kube-proxy-s5bzz                             kube-system
	e0994ea947678       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   2cca5d5d90e11       kube-scheduler-embed-certs-811173            kube-system
	b9603bf135a48       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   cdec74f3d5b9f       kube-apiserver-embed-certs-811173            kube-system
	05974f8fe2ed9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   14e681817667b       kube-controller-manager-embed-certs-811173   kube-system
	706b2dbda2d38       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   e080a952b1840       etcd-embed-certs-811173                      kube-system
	
	
	==> coredns [b2cdca1146afc3b622739f950e64864d26eba81ead7099474120baed23ee6f0e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:51116 - 40744 "HINFO IN 6888089067274774528.5535670819203862021. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.490073089s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-811173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-811173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=embed-certs-811173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:42:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-811173
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:28 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:28 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:28 +0000   Wed, 19 Nov 2025 02:42:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:44:28 +0000   Wed, 19 Nov 2025 02:43:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-811173
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                c32197b9-e1d7-4c8f-bcdd-84def1c02350
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-6zqr2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-811173                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-b2w9g                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-811173             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-811173    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-s5bzz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-811173             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cvcns    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-22wsb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node embed-certs-811173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node embed-certs-811173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node embed-certs-811173 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node embed-certs-811173 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node embed-certs-811173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node embed-certs-811173 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-811173 event: Registered Node embed-certs-811173 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-811173 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node embed-certs-811173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node embed-certs-811173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node embed-certs-811173 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node embed-certs-811173 event: Registered Node embed-certs-811173 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [706b2dbda2d38ebc2ca3e61f6b17e96a3d75c375c204a2bcebbf88ede678a129] <==
	{"level":"warn","ts":"2025-11-19T02:43:57.471176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.478793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.484698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.490676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.497223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.508510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.514137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.520773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.526610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.532445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.538854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.544840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.552322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.558849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.564624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.570908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.576707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.600151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.606896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.612702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:43:57.661705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:44:34.791880Z","caller":"traceutil/trace.go:172","msg":"trace[1717592132] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:693; }","duration":"141.98362ms","start":"2025-11-19T02:44:34.649873Z","end":"2025-11-19T02:44:34.791856Z","steps":["trace[1717592132] 'read index received'  (duration: 141.977964ms)","trace[1717592132] 'applied index is now lower than readState.Index'  (duration: 4.779µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:44:34.828493Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.585318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-811173\" limit:1 ","response":"range_response_count:1 size:5935"}
	{"level":"info","ts":"2025-11-19T02:44:34.828594Z","caller":"traceutil/trace.go:172","msg":"trace[128514129] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-811173; range_end:; response_count:1; response_revision:658; }","duration":"178.708646ms","start":"2025-11-19T02:44:34.649870Z","end":"2025-11-19T02:44:34.828578Z","steps":["trace[128514129] 'agreement among raft nodes before linearized reading'  (duration: 142.083486ms)","trace[128514129] 'range keys from in-memory index tree'  (duration: 36.371941ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:44:34.828704Z","caller":"traceutil/trace.go:172","msg":"trace[2081923050] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"179.294304ms","start":"2025-11-19T02:44:34.649398Z","end":"2025-11-19T02:44:34.828692Z","steps":["trace[2081923050] 'process raft request'  (duration: 142.529146ms)","trace[2081923050] 'compare'  (duration: 36.469881ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:44:52 up  1:27,  0 user,  load average: 3.21, 3.28, 2.28
	Linux embed-certs-811173 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aa517f916a402e98a0e426261b2e8d3cf00858150d67405fda67eaf37bb6e901] <==
	I1119 02:43:59.435969       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:43:59.436218       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:43:59.436377       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:43:59.436394       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:43:59.436419       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:43:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:43:59.690490       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:43:59.690551       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:43:59.690565       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:43:59.690690       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:44:00.090348       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:44:00.090389       1 metrics.go:72] Registering metrics
	I1119 02:44:00.092489       1 controller.go:711] "Syncing nftables rules"
	I1119 02:44:09.639643       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:09.639723       1 main.go:301] handling current node
	I1119 02:44:19.643506       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:19.643568       1 main.go:301] handling current node
	I1119 02:44:29.639513       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:29.639586       1 main.go:301] handling current node
	I1119 02:44:39.641552       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:39.641593       1 main.go:301] handling current node
	I1119 02:44:49.648523       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:44:49.648556       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b9603bf135a48a7fd7f1a7df00bc5ac2ca325854631a2e9109eebbe9c579c3fc] <==
	I1119 02:43:58.107248       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 02:43:58.107282       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:43:58.107294       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:43:58.107613       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:43:58.107307       1 aggregator.go:171] initial CRD sync complete...
	I1119 02:43:58.107702       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:43:58.107709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:43:58.107716       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:43:58.107366       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:43:58.110102       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 02:43:58.110136       1 policy_source.go:240] refreshing policies
	I1119 02:43:58.112729       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:43:58.147653       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:43:58.394849       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:43:58.421144       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:43:58.438167       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:43:58.444958       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:43:58.450940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:43:58.484586       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.213.84"}
	I1119 02:43:58.494475       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.251.255"}
	I1119 02:43:59.010415       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:44:01.784685       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:44:01.838848       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:44:01.889499       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [05974f8fe2ed9b3af8b149d271de0fd120542bca0e181f00cc290f0684748003] <==
	I1119 02:44:01.348911       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-811173"
	I1119 02:44:01.348954       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 02:44:01.350777       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:44:01.352037       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:44:01.356501       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:44:01.378917       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:44:01.378976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:44:01.378991       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:44:01.379001       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:44:01.379001       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:44:01.379004       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:44:01.379025       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:44:01.379245       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:44:01.379249       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:44:01.379364       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:44:01.380521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:44:01.382383       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:44:01.382491       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:44:01.385682       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:44:01.391200       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 02:44:01.392463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:44:01.394604       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:44:01.398872       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 02:44:01.401369       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:44:01.442545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b23c1eb2d226e72679956724897a9a7086eebbdc3ef47d3921d153ed58b2e05d] <==
	I1119 02:43:59.299508       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:43:59.364489       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:43:59.465279       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:43:59.465339       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 02:43:59.465417       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:43:59.483490       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:43:59.483555       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:43:59.488637       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:43:59.489025       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:43:59.489054       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:59.490459       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:43:59.490494       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:43:59.490496       1 config.go:200] "Starting service config controller"
	I1119 02:43:59.490518       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:43:59.490510       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:43:59.490548       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:43:59.490554       1 config.go:309] "Starting node config controller"
	I1119 02:43:59.490570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:43:59.490586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:43:59.590688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:43:59.590707       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:43:59.590732       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e0994ea94767873e5f7aa16af71ef5155fc15391a563da35948cadb1520f80bd] <==
	I1119 02:43:56.824322       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:43:58.021674       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:43:58.021707       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:43:58.021723       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:43:58.021732       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:43:58.070148       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:43:58.070243       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:43:58.073584       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:43:58.073768       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:43:58.073709       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:43:58.073885       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:43:58.174679       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:44:02 embed-certs-811173 kubelet[728]: I1119 02:44:02.189588     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98385ede-e5ef-4e37-b563-0e45839e67f5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-cvcns\" (UID: \"98385ede-e5ef-4e37-b563-0e45839e67f5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns"
	Nov 19 02:44:02 embed-certs-811173 kubelet[728]: I1119 02:44:02.189612     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbsq4\" (UniqueName: \"kubernetes.io/projected/98385ede-e5ef-4e37-b563-0e45839e67f5-kube-api-access-gbsq4\") pod \"dashboard-metrics-scraper-6ffb444bf9-cvcns\" (UID: \"98385ede-e5ef-4e37-b563-0e45839e67f5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns"
	Nov 19 02:44:04 embed-certs-811173 kubelet[728]: I1119 02:44:04.459919     728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 02:44:08 embed-certs-811173 kubelet[728]: I1119 02:44:08.980478     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22wsb" podStartSLOduration=0.636456724 podStartE2EDuration="6.980454113s" podCreationTimestamp="2025-11-19 02:44:02 +0000 UTC" firstStartedPulling="2025-11-19 02:44:02.418417638 +0000 UTC m=+6.629049102" lastFinishedPulling="2025-11-19 02:44:08.762415036 +0000 UTC m=+12.973046491" observedRunningTime="2025-11-19 02:44:08.979070336 +0000 UTC m=+13.189701806" watchObservedRunningTime="2025-11-19 02:44:08.980454113 +0000 UTC m=+13.191085585"
	Nov 19 02:44:11 embed-certs-811173 kubelet[728]: I1119 02:44:11.974097     728 scope.go:117] "RemoveContainer" containerID="c7a10a12fb73d562dedac0ef67ec7f7db10ae17c3abf93841264c0f9068cdd13"
	Nov 19 02:44:12 embed-certs-811173 kubelet[728]: I1119 02:44:12.979079     728 scope.go:117] "RemoveContainer" containerID="c7a10a12fb73d562dedac0ef67ec7f7db10ae17c3abf93841264c0f9068cdd13"
	Nov 19 02:44:12 embed-certs-811173 kubelet[728]: I1119 02:44:12.979246     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:12 embed-certs-811173 kubelet[728]: E1119 02:44:12.979467     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:13 embed-certs-811173 kubelet[728]: I1119 02:44:13.985694     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:13 embed-certs-811173 kubelet[728]: E1119 02:44:13.985879     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:16 embed-certs-811173 kubelet[728]: I1119 02:44:16.086482     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:16 embed-certs-811173 kubelet[728]: E1119 02:44:16.086744     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:27 embed-certs-811173 kubelet[728]: I1119 02:44:27.895787     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:28 embed-certs-811173 kubelet[728]: I1119 02:44:28.019607     728 scope.go:117] "RemoveContainer" containerID="efe6e882130677370fb7e797e7ec99ccd1c65328b8552639fdbb47039cad64e5"
	Nov 19 02:44:28 embed-certs-811173 kubelet[728]: I1119 02:44:28.019850     728 scope.go:117] "RemoveContainer" containerID="684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	Nov 19 02:44:28 embed-certs-811173 kubelet[728]: E1119 02:44:28.020042     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:30 embed-certs-811173 kubelet[728]: I1119 02:44:30.028179     728 scope.go:117] "RemoveContainer" containerID="c54f63e61cd62e1b142da359df33d91caf60f07fa0b8e3232b02d81672c144e4"
	Nov 19 02:44:36 embed-certs-811173 kubelet[728]: I1119 02:44:36.086509     728 scope.go:117] "RemoveContainer" containerID="684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	Nov 19 02:44:36 embed-certs-811173 kubelet[728]: E1119 02:44:36.086693     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:46 embed-certs-811173 kubelet[728]: I1119 02:44:46.895778     728 scope.go:117] "RemoveContainer" containerID="684aff13b7f3d8b49f89e310e16d2708b26afd7ddd0b11590c24b5ee6fb5638d"
	Nov 19 02:44:46 embed-certs-811173 kubelet[728]: E1119 02:44:46.895974     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvcns_kubernetes-dashboard(98385ede-e5ef-4e37-b563-0e45839e67f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvcns" podUID="98385ede-e5ef-4e37-b563-0e45839e67f5"
	Nov 19 02:44:48 embed-certs-811173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:44:48 embed-certs-811173 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:44:48 embed-certs-811173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:44:48 embed-certs-811173 systemd[1]: kubelet.service: Consumed 1.629s CPU time.
	
	
	==> kubernetes-dashboard [cca02da00a4676afe504adf5be3a8411759a7aeae1cf8b33d87c2969c8b35ee0] <==
	2025/11/19 02:44:08 Starting overwatch
	2025/11/19 02:44:08 Using namespace: kubernetes-dashboard
	2025/11/19 02:44:08 Using in-cluster config to connect to apiserver
	2025/11/19 02:44:08 Using secret token for csrf signing
	2025/11/19 02:44:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:44:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:44:08 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 02:44:08 Generating JWE encryption key
	2025/11/19 02:44:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:44:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:44:09 Initializing JWE encryption key from synchronized object
	2025/11/19 02:44:09 Creating in-cluster Sidecar client
	2025/11/19 02:44:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:09 Serving insecurely on HTTP port: 9090
	2025/11/19 02:44:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [93226249c61ba23fe7678a2d4d31aafafa7d131e89b8004db43a8fef6f648222] <==
	I1119 02:44:30.082948       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:44:30.092147       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:44:30.092196       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:44:30.094327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:33.548566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:37.809569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:41.407989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:44.461315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:47.483832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:47.488176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:47.488341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:44:47.488484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6ae127c-c859-4ddd-8bc9-6532cea887ea", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-811173_c9984f80-c805-4726-b8ca-3bac7548e455 became leader
	I1119 02:44:47.488539       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-811173_c9984f80-c805-4726-b8ca-3bac7548e455!
	W1119 02:44:47.490475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:47.493942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:47.589693       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-811173_c9984f80-c805-4726-b8ca-3bac7548e455!
	W1119 02:44:49.497727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:49.502085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:51.505475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:51.511238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c54f63e61cd62e1b142da359df33d91caf60f07fa0b8e3232b02d81672c144e4] <==
	I1119 02:43:59.268652       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:44:29.271824       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
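Note on the storage-provisioner logs above: the first instance (c54f63e6...) died at startup because its probe of the API server timed out (Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout), and the replacement (93226249...) only acquired the leader lease about 17 seconds after starting. A minimal Go sketch of that kind of startup probe, for reference only; the URL and 32s timeout come from the log line, and the TLS handling here is a placeholder rather than the provisioner's real in-cluster client:

	// probe_apiserver.go - a sketch (not minikube or provisioner source) of the
	// startup check the fatal log line above reports failing: an HTTPS GET
	// against the in-cluster service IP with a 32-second timeout.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second,
			// Placeholder TLS config: this sketch has no cluster CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			// This is the path taken in the log: dial tcp 10.96.0.1:443: i/o timeout.
			fmt.Printf("error getting server version: %v\n", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver reachable:", resp.Status)
	}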
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-811173 -n embed-certs-811173
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-811173 -n embed-certs-811173: exit status 2 (326.578298ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-811173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.65s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-167150 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-167150 --alsologtostderr -v=1: exit status 80 (1.724536911s)

-- stdout --
	* Pausing node default-k8s-diff-port-167150 ... 
	
	

-- /stdout --
** stderr ** 
	I1119 02:44:54.021343  335494 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:54.021631  335494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:54.021642  335494 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:54.021646  335494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:54.021857  335494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:54.022058  335494 out.go:368] Setting JSON to false
	I1119 02:44:54.022095  335494 mustload.go:66] Loading cluster: default-k8s-diff-port-167150
	I1119 02:44:54.022383  335494 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:54.022773  335494 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-167150 --format={{.State.Status}}
	I1119 02:44:54.041345  335494 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:44:54.041592  335494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:54.100043  335494 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-19 02:44:54.089110783 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:54.100670  335494 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-167150 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 02:44:54.102405  335494 out.go:179] * Pausing node default-k8s-diff-port-167150 ... 
	I1119 02:44:54.103591  335494 host.go:66] Checking if "default-k8s-diff-port-167150" exists ...
	I1119 02:44:54.103889  335494 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:54.103928  335494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-167150
	I1119 02:44:54.121692  335494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/default-k8s-diff-port-167150/id_rsa Username:docker}
	I1119 02:44:54.215310  335494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:54.228373  335494 pause.go:52] kubelet running: true
	I1119 02:44:54.228459  335494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:54.394184  335494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:54.394268  335494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:54.461937  335494 cri.go:89] found id: "c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132"
	I1119 02:44:54.461981  335494 cri.go:89] found id: "eeac877f14e22971c9a442dc5730f94bf48becadd83cf5c234243e980bc2e2dd"
	I1119 02:44:54.461986  335494 cri.go:89] found id: "f7ef5557ab210d9505a17d76636e0b17ddac6b55834fdc5d6452172261f6d65e"
	I1119 02:44:54.461991  335494 cri.go:89] found id: "1a72617903e6314079ce5b0564a60457c183f68e6e318bd4b089000d5050df80"
	I1119 02:44:54.461994  335494 cri.go:89] found id: "84cc1d377e54ec20e1b335bb3f2c1a89459c779c9092721db31a065e74db7d72"
	I1119 02:44:54.461999  335494 cri.go:89] found id: "0850d32773d1729f97e0f3baf42d1b3638a7327abc66f584efafbdaa4334a283"
	I1119 02:44:54.462003  335494 cri.go:89] found id: "299bbab984622e99c9bf240099fd1891299f48da807c2b0ab1553ad4885d7c13"
	I1119 02:44:54.462007  335494 cri.go:89] found id: "7cdb91f63703193832fa8fc84ec766b4d87e2ac3e24887dcbcb074dfdac9634d"
	I1119 02:44:54.462010  335494 cri.go:89] found id: "f308d3728814cf13897a458da3b827483ae71b6a4cf2cb0fd38e141e14586a3e"
	I1119 02:44:54.462030  335494 cri.go:89] found id: "bcec9b49dd5da59561dcca13fbccd1ddc4abfb2a5907b4002476e334cc41c669"
	I1119 02:44:54.462034  335494 cri.go:89] found id: "85d4c9fe32d3e2d0a6c2edf0b44354e46f03306212318e808dee8bf625ac5497"
	I1119 02:44:54.462038  335494 cri.go:89] found id: ""
	I1119 02:44:54.462097  335494 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:54.476637  335494 retry.go:31] will retry after 301.68808ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:54Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:54.779148  335494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:54.791859  335494 pause.go:52] kubelet running: false
	I1119 02:44:54.791908  335494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:54.955300  335494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:54.955418  335494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:55.034692  335494 cri.go:89] found id: "c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132"
	I1119 02:44:55.034722  335494 cri.go:89] found id: "eeac877f14e22971c9a442dc5730f94bf48becadd83cf5c234243e980bc2e2dd"
	I1119 02:44:55.034728  335494 cri.go:89] found id: "f7ef5557ab210d9505a17d76636e0b17ddac6b55834fdc5d6452172261f6d65e"
	I1119 02:44:55.034734  335494 cri.go:89] found id: "1a72617903e6314079ce5b0564a60457c183f68e6e318bd4b089000d5050df80"
	I1119 02:44:55.034737  335494 cri.go:89] found id: "84cc1d377e54ec20e1b335bb3f2c1a89459c779c9092721db31a065e74db7d72"
	I1119 02:44:55.034742  335494 cri.go:89] found id: "0850d32773d1729f97e0f3baf42d1b3638a7327abc66f584efafbdaa4334a283"
	I1119 02:44:55.034746  335494 cri.go:89] found id: "299bbab984622e99c9bf240099fd1891299f48da807c2b0ab1553ad4885d7c13"
	I1119 02:44:55.034751  335494 cri.go:89] found id: "7cdb91f63703193832fa8fc84ec766b4d87e2ac3e24887dcbcb074dfdac9634d"
	I1119 02:44:55.034756  335494 cri.go:89] found id: "f308d3728814cf13897a458da3b827483ae71b6a4cf2cb0fd38e141e14586a3e"
	I1119 02:44:55.034764  335494 cri.go:89] found id: "bcec9b49dd5da59561dcca13fbccd1ddc4abfb2a5907b4002476e334cc41c669"
	I1119 02:44:55.034772  335494 cri.go:89] found id: "85d4c9fe32d3e2d0a6c2edf0b44354e46f03306212318e808dee8bf625ac5497"
	I1119 02:44:55.034776  335494 cri.go:89] found id: ""
	I1119 02:44:55.034821  335494 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:55.046307  335494 retry.go:31] will retry after 259.33531ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:55Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:44:55.306290  335494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:55.343406  335494 pause.go:52] kubelet running: false
	I1119 02:44:55.343489  335494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:44:55.563654  335494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:44:55.563900  335494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:44:55.657947  335494 cri.go:89] found id: "c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132"
	I1119 02:44:55.657987  335494 cri.go:89] found id: "eeac877f14e22971c9a442dc5730f94bf48becadd83cf5c234243e980bc2e2dd"
	I1119 02:44:55.657995  335494 cri.go:89] found id: "f7ef5557ab210d9505a17d76636e0b17ddac6b55834fdc5d6452172261f6d65e"
	I1119 02:44:55.658000  335494 cri.go:89] found id: "1a72617903e6314079ce5b0564a60457c183f68e6e318bd4b089000d5050df80"
	I1119 02:44:55.658005  335494 cri.go:89] found id: "84cc1d377e54ec20e1b335bb3f2c1a89459c779c9092721db31a065e74db7d72"
	I1119 02:44:55.658010  335494 cri.go:89] found id: "0850d32773d1729f97e0f3baf42d1b3638a7327abc66f584efafbdaa4334a283"
	I1119 02:44:55.658014  335494 cri.go:89] found id: "299bbab984622e99c9bf240099fd1891299f48da807c2b0ab1553ad4885d7c13"
	I1119 02:44:55.658019  335494 cri.go:89] found id: "7cdb91f63703193832fa8fc84ec766b4d87e2ac3e24887dcbcb074dfdac9634d"
	I1119 02:44:55.658023  335494 cri.go:89] found id: "f308d3728814cf13897a458da3b827483ae71b6a4cf2cb0fd38e141e14586a3e"
	I1119 02:44:55.658052  335494 cri.go:89] found id: "bcec9b49dd5da59561dcca13fbccd1ddc4abfb2a5907b4002476e334cc41c669"
	I1119 02:44:55.658062  335494 cri.go:89] found id: "85d4c9fe32d3e2d0a6c2edf0b44354e46f03306212318e808dee8bf625ac5497"
	I1119 02:44:55.658066  335494 cri.go:89] found id: ""
	I1119 02:44:55.658111  335494 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:44:55.676212  335494 out.go:203] 
	W1119 02:44:55.677943  335494 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:44:55.677967  335494 out.go:285] * 
	* 
	W1119 02:44:55.683491  335494 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:44:55.685476  335494 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-167150 --alsologtostderr -v=1 failed: exit status 80
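For context on this exit status 80: the stderr above shows `minikube pause` disabling the kubelet and then repeatedly shelling out to `sudo runc list -f json`, which keeps failing with `open /run/runc: no such file or directory`; after two short retries (roughly 300ms and 260ms in this run) it gives up with GUEST_PAUSE. The same signature appears in the embed-certs, old-k8s-version, no-preload, and newest-cni Pause failures in this report. A rough sketch of that retry-then-fail pattern follows; this is an assumed shape, not minikube's actual pause code, and the attempt count and delay range are illustrative:

	// runc_list_retry.go - illustrative retry loop around `sudo runc list -f json`,
	// mirroring the retry.go/pause.go behavior visible in the log above.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func main() {
		var lastErr error
		for attempt := 0; attempt < 3; attempt++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			lastErr = fmt.Errorf("list running: runc: %w: %s", err, out)
			// Short randomized backoff, like the ~300ms/~260ms delays in the log.
			delay := time.Duration(200+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, lastErr)
			time.Sleep(delay)
		}
		// Corresponds to the final GUEST_PAUSE exit when /run/runc never appears.
		fmt.Println("Exiting due to GUEST_PAUSE:", lastErr)
	}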
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-167150
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-167150:

-- stdout --
	[
	    {
	        "Id": "eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62",
	        "Created": "2025-11-19T02:42:49.168084052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 322071,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:43:54.569776656Z",
	            "FinishedAt": "2025-11-19T02:43:53.705001054Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/hostname",
	        "HostsPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/hosts",
	        "LogPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62-json.log",
	        "Name": "/default-k8s-diff-port-167150",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-167150:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-167150",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62",
	                "LowerDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-167150",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-167150/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-167150",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-167150",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-167150",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a7c75d8f7228900b6b4577f09d59e758ad61b773bdaad84cf94e376da66c89e3",
	            "SandboxKey": "/var/run/docker/netns/a7c75d8f7228",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-167150": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "446e8ca4ab47bdd8748a928cc2372566c0406b83b85d73866a63b7236a1153af",
	                    "EndpointID": "6d2b808ccb785c5597aec3aa1a3d24aed925055b8551b38bb4b8e035f6985c5f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b6:ad:db:08:68:f4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-167150",
	                        "eba2f66817ce"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
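The "22/tcp" entry in the inspect output above (HostPort 33118) is what the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call in the pause stderr resolves to when building the SSH client. Decoding the same field from the JSON in Go looks roughly like this; the struct is illustrative and mirrors only the fields shown above, not minikube's implementation:

	// ssh_port.go - sketch of reading the SSH host port from `docker inspect` JSON.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-167150").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect // docker inspect returns a JSON array
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		// Same lookup as the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
		fmt.Println("ssh host port:", containers[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
	}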
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150: exit status 2 (342.453256ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
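On the exit-status-2 `status` calls here and earlier: `--format={{.Host}}` is a Go text/template rendered against minikube's status data, so stdout can read `Running` while the command still exits non-zero because another component (kubelet or apiserver) is stopped or half-paused, which is why the helper treats exit status 2 as "may be ok". A toy illustration of the mechanism; the Status struct here is a stand-in, not minikube's actual type:

	// status_format.go - shows the template rendering behind `status --format={{.Host}}`.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Paused"}
		// The CLI parses the --format flag value much like this:
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}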
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-167150 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-167150 logs -n 25: (1.17691305s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-987573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-811173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-167150 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ image   │ default-k8s-diff-port-167150 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p default-k8s-diff-port-167150 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:44:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:44:29.891671  330644 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:29.891773  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.891782  330644 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:29.891786  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.892013  330644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:29.892489  330644 out.go:368] Setting JSON to false
	I1119 02:44:29.893932  330644 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5217,"bootTime":1763515053,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:44:29.894009  330644 start.go:143] virtualization: kvm guest
	I1119 02:44:29.896106  330644 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:44:29.897349  330644 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:44:29.897381  330644 notify.go:221] Checking for updates...
	I1119 02:44:29.899649  330644 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:44:29.900639  330644 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:29.901703  330644 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:44:29.902810  330644 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:44:29.903920  330644 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:44:29.905455  330644 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905620  330644 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905735  330644 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905864  330644 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:44:29.930269  330644 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:44:29.930390  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:29.990852  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:29.980528215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:29.990974  330644 docker.go:319] overlay module found
	I1119 02:44:29.992680  330644 out.go:179] * Using the docker driver based on user configuration
	I1119 02:44:29.993882  330644 start.go:309] selected driver: docker
	I1119 02:44:29.993897  330644 start.go:930] validating driver "docker" against <nil>
	I1119 02:44:29.993908  330644 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:44:29.994485  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:30.055174  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:30.045301349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:30.055367  330644 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 02:44:30.055398  330644 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 02:44:30.055690  330644 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:44:30.057878  330644 out.go:179] * Using Docker driver with root privileges
	I1119 02:44:30.059068  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:30.059130  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:30.059141  330644 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:44:30.059196  330644 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:30.060543  330644 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:44:30.061681  330644 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:44:30.062975  330644 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:44:30.064114  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.064143  330644 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:44:30.064167  330644 cache.go:65] Caching tarball of preloaded images
	I1119 02:44:30.064199  330644 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:44:30.064251  330644 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:44:30.064266  330644 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:44:30.064364  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:30.064387  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json: {Name:mk5f6a602a7486c803f28ee981bc4fb72f30089f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:30.086997  330644 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:44:30.087020  330644 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:44:30.087033  330644 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:44:30.087059  330644 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:44:30.087146  330644 start.go:364] duration metric: took 69.531µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:44:30.087169  330644 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:30.087250  330644 start.go:125] createHost starting for "" (driver="docker")
	W1119 02:44:25.920223  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:28.420250  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:30.420774  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:29.634283  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:31.634456  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:34.134853  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:29.824614  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:31.825210  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:33.861933  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:30.090250  330644 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:44:30.090540  330644 start.go:159] libmachine.API.Create for "newest-cni-956139" (driver="docker")
	I1119 02:44:30.090580  330644 client.go:173] LocalClient.Create starting
	I1119 02:44:30.090711  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 02:44:30.090762  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090788  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.090868  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 02:44:30.090897  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090911  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.091311  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:44:30.108825  330644 cli_runner.go:211] docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:44:30.108874  330644 network_create.go:284] running [docker network inspect newest-cni-956139] to gather additional debugging logs...
	I1119 02:44:30.108888  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139
	W1119 02:44:30.125848  330644 cli_runner.go:211] docker network inspect newest-cni-956139 returned with exit code 1
	I1119 02:44:30.125873  330644 network_create.go:287] error running [docker network inspect newest-cni-956139]: docker network inspect newest-cni-956139: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-956139 not found
	I1119 02:44:30.125887  330644 network_create.go:289] output of [docker network inspect newest-cni-956139]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-956139 not found
	
	** /stderr **
	I1119 02:44:30.126008  330644 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:30.145372  330644 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
	I1119 02:44:30.146006  330644 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-70e7d73f86d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:64:3f:46:8e:7a} reservation:<nil>}
	I1119 02:44:30.146778  330644 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ef477b5a23 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:eb:22:b3:62:92} reservation:<nil>}
	I1119 02:44:30.147612  330644 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee2320}
	I1119 02:44:30.147633  330644 network_create.go:124] attempt to create docker network newest-cni-956139 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 02:44:30.147689  330644 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-956139 newest-cni-956139
	I1119 02:44:30.194747  330644 network_create.go:108] docker network newest-cni-956139 192.168.76.0/24 created
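	
The three "skipping subnet" lines above show how minikube probes candidate /24 subnets (192.168.49.0, 192.168.58.0 and 192.168.67.0 were already held by other profiles) before creating a labelled bridge network on the first free one. A minimal shell sketch of the same step, assuming 192.168.76.0/24 is still free on the host:

    # Create the profile network only if it does not already exist
    # (docker network inspect exits non-zero for an unknown network).
    docker network inspect newest-cni-956139 >/dev/null 2>&1 || \
      docker network create --driver=bridge \
        --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=newest-cni-956139 \
        newest-cni-956139
	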
	I1119 02:44:30.194772  330644 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-956139" container
	I1119 02:44:30.194838  330644 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:44:30.212175  330644 cli_runner.go:164] Run: docker volume create newest-cni-956139 --label name.minikube.sigs.k8s.io=newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:44:30.229588  330644 oci.go:103] Successfully created a docker volume newest-cni-956139
	I1119 02:44:30.229664  330644 cli_runner.go:164] Run: docker run --rm --name newest-cni-956139-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --entrypoint /usr/bin/test -v newest-cni-956139:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:44:30.612069  330644 oci.go:107] Successfully prepared a docker volume newest-cni-956139
	I1119 02:44:30.612124  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.612132  330644 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:44:30.612187  330644 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1119 02:44:32.919409  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:34.920166  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:34.646141  320707 pod_ready.go:94] pod "coredns-66bc5c9577-6zqr2" is "Ready"
	I1119 02:44:34.646170  320707 pod_ready.go:86] duration metric: took 35.016957338s for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.648819  320707 pod_ready.go:83] waiting for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.831828  320707 pod_ready.go:94] pod "etcd-embed-certs-811173" is "Ready"
	I1119 02:44:34.831852  320707 pod_ready.go:86] duration metric: took 183.006168ms for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.834239  320707 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.837643  320707 pod_ready.go:94] pod "kube-apiserver-embed-certs-811173" is "Ready"
	I1119 02:44:34.837663  320707 pod_ready.go:86] duration metric: took 3.400351ms for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.839329  320707 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.842652  320707 pod_ready.go:94] pod "kube-controller-manager-embed-certs-811173" is "Ready"
	I1119 02:44:34.842670  320707 pod_ready.go:86] duration metric: took 3.319388ms for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.032627  320707 pod_ready.go:83] waiting for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.432934  320707 pod_ready.go:94] pod "kube-proxy-s5bzz" is "Ready"
	I1119 02:44:35.432959  320707 pod_ready.go:86] duration metric: took 400.306652ms for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.633961  320707 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032469  320707 pod_ready.go:94] pod "kube-scheduler-embed-certs-811173" is "Ready"
	I1119 02:44:36.032499  320707 pod_ready.go:86] duration metric: took 398.480495ms for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032511  320707 pod_ready.go:40] duration metric: took 36.406499301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:36.080404  320707 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:36.082160  320707 out.go:179] * Done! kubectl is now configured to use "embed-certs-811173" cluster and "default" namespace by default
	I1119 02:44:34.960079  330644 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.347852696s)
	I1119 02:44:34.960108  330644 kic.go:203] duration metric: took 4.347972861s to extract preloaded images to volume ...
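	
The image preload is unpacked straight into the machine volume by a throwaway container, which is why no images need to be pulled later. Condensed from the Run line above (a sketch; the tarball path is the one cached by this CI run):

    # Untar the lz4 preload into the named volume that later becomes /var in the node.
    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro \
      -v newest-cni-956139:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a \
      -I lz4 -xf /preloaded.tar -C /extractDir
	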
	W1119 02:44:34.960206  330644 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:44:34.960254  330644 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:44:34.960300  330644 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:44:35.014083  330644 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-956139 --name newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-956139 --network newest-cni-956139 --ip 192.168.76.2 --volume newest-cni-956139:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:44:35.325493  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Running}}
	I1119 02:44:35.343669  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.361759  330644 cli_runner.go:164] Run: docker exec newest-cni-956139 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:44:35.406925  330644 oci.go:144] the created container "newest-cni-956139" has a running status.
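	
The node container started at 02:44:35.014083 above carries a long flag list; the essential flags, condensed into a sketch (the image digest is bound to a variable purely for readability):

    KIC_IMAGE='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a'
    # Privileged systemd container on the profile network with a static IP;
    # SSH (22) and the apiserver (8443) are published on loopback-only ephemeral ports.
    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --hostname newest-cni-956139 --name newest-cni-956139 \
      --network newest-cni-956139 --ip 192.168.76.2 \
      --volume newest-cni-956139:/var --memory=3072mb \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      "$KIC_IMAGE"
	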
	I1119 02:44:35.406959  330644 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa...
	I1119 02:44:35.779267  330644 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:44:35.805615  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.826512  330644 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:44:35.826530  330644 kic_runner.go:114] Args: [docker exec --privileged newest-cni-956139 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:44:35.871319  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.889991  330644 machine.go:94] provisionDockerMachine start ...
	I1119 02:44:35.890097  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:35.909789  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:35.910136  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:35.910158  330644 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:44:36.043778  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.043805  330644 ubuntu.go:182] provisioning hostname "newest-cni-956139"
	I1119 02:44:36.043885  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.065697  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.065904  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.065918  330644 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956139 && echo "newest-cni-956139" | sudo tee /etc/hostname
	I1119 02:44:36.211004  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.211088  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.229392  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.229616  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.229635  330644 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956139/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:44:36.359138  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:44:36.359177  330644 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:44:36.359210  330644 ubuntu.go:190] setting up certificates
	I1119 02:44:36.359219  330644 provision.go:84] configureAuth start
	I1119 02:44:36.359262  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:36.381048  330644 provision.go:143] copyHostCerts
	I1119 02:44:36.381118  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:44:36.381134  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:44:36.381241  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:44:36.381393  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:44:36.381407  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:44:36.381473  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:44:36.381598  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:44:36.381613  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:44:36.381659  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:44:36.381762  330644 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956139 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956139]
	I1119 02:44:36.425094  330644 provision.go:177] copyRemoteCerts
	I1119 02:44:36.425145  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:44:36.425178  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.444152  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.542494  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:44:36.560963  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:44:36.577617  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:44:36.594302  330644 provision.go:87] duration metric: took 235.073311ms to configureAuth
	I1119 02:44:36.594322  330644 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:44:36.594527  330644 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:36.594625  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.612019  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.612218  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.612232  330644 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:44:36.879790  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:44:36.879819  330644 machine.go:97] duration metric: took 989.804229ms to provisionDockerMachine
	I1119 02:44:36.879830  330644 client.go:176] duration metric: took 6.789240603s to LocalClient.Create
	I1119 02:44:36.879851  330644 start.go:167] duration metric: took 6.789312626s to libmachine.API.Create "newest-cni-956139"
	I1119 02:44:36.879860  330644 start.go:293] postStartSetup for "newest-cni-956139" (driver="docker")
	I1119 02:44:36.879872  330644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:44:36.879933  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:44:36.879968  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.898156  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.993744  330644 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:44:36.997203  330644 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:44:36.997235  330644 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:44:36.997254  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:44:36.997312  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:44:36.997404  330644 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:44:36.997536  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:44:37.005305  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:37.024142  330644 start.go:296] duration metric: took 144.272497ms for postStartSetup
	I1119 02:44:37.024490  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.042142  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:37.042364  330644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:44:37.042421  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.060279  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.151155  330644 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:44:37.155487  330644 start.go:128] duration metric: took 7.068223226s to createHost
	I1119 02:44:37.155509  330644 start.go:83] releasing machines lock for "newest-cni-956139", held for 7.068353821s
	I1119 02:44:37.155567  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.172738  330644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:44:37.172750  330644 ssh_runner.go:195] Run: cat /version.json
	I1119 02:44:37.172802  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.172817  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.191403  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.191761  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.349781  330644 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:37.356447  330644 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:44:37.390971  330644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:44:37.395386  330644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:44:37.395452  330644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:44:37.420966  330644 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:44:37.421000  330644 start.go:496] detecting cgroup driver to use...
	I1119 02:44:37.421031  330644 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:44:37.421116  330644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:44:37.437016  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:44:37.448636  330644 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:44:37.448680  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:44:37.464103  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:44:37.483229  330644 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:37.569719  330644 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:37.663891  330644 docker.go:234] disabling docker service ...
	I1119 02:44:37.663946  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:37.684672  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:37.699707  330644 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:37.783938  330644 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:37.866466  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:44:37.878906  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:37.893148  330644 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:37.893200  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.903765  330644 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:37.903825  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.912380  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.922240  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.930944  330644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:37.938625  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.947066  330644 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.960171  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.968261  330644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:37.975267  330644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:37.982398  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.060067  330644 ssh_runner.go:195] Run: sudo systemctl restart crio
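	
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before the restart (a reconstruction from the commands, with the usual CRI-O section headers assumed; not a dump of the actual file):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
	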
	I1119 02:44:38.192960  330644 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:38.193022  330644 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:38.196763  330644 start.go:564] Will wait 60s for crictl version
	I1119 02:44:38.196824  330644 ssh_runner.go:195] Run: which crictl
	I1119 02:44:38.200161  330644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:38.225001  330644 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:44:38.225065  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.251944  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.282138  330644 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:44:38.283487  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:38.300312  330644 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:38.304280  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:38.315573  330644 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1119 02:44:36.325065  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:38.824893  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:38.316650  330644 kubeadm.go:884] updating cluster {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:38.316772  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:38.316823  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.347925  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.347943  330644 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:44:38.348024  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.371370  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.371386  330644 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:38.371393  330644 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:44:38.371489  330644 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:44:38.371568  330644 ssh_runner.go:195] Run: crio config
	I1119 02:44:38.414403  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:38.414425  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:38.414455  330644 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:44:38.414480  330644 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956139 NodeName:newest-cni-956139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:38.414596  330644 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
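This generated config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On kubeadm v1.34 such a file can be sanity-checked without touching the cluster (a usage sketch, assuming the kubeadm binary path used in this run):

    # Validate the generated configuration; reports unknown or deprecated fields.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	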
	I1119 02:44:38.414650  330644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:38.422980  330644 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:38.423037  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:38.430764  330644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 02:44:38.442899  330644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:38.457503  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1119 02:44:38.470194  330644 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:38.473583  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:38.482869  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.562300  330644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:38.585622  330644 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139 for IP: 192.168.76.2
	I1119 02:44:38.585639  330644 certs.go:195] generating shared ca certs ...
	I1119 02:44:38.585658  330644 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.585812  330644 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:38.585880  330644 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:38.585900  330644 certs.go:257] generating profile certs ...
	I1119 02:44:38.585973  330644 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key
	I1119 02:44:38.585994  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt with IP's: []
	I1119 02:44:38.886736  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt ...
	I1119 02:44:38.886761  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt: {Name:mkb981b48727217d5d544f8c1ece639a24196b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.886914  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key ...
	I1119 02:44:38.886927  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key: {Name:mkf09d335927b94ecd83db709f24055ce131f9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.887002  330644 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d
	I1119 02:44:38.887016  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 02:44:39.078031  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d ...
	I1119 02:44:39.078059  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d: {Name:mkcff50d0bd0e5de553650f0790abc33df1f3d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078203  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d ...
	I1119 02:44:39.078217  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d: {Name:mk332d91d4c4926805e4ae3abcbd91571604bef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078314  330644 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt
	I1119 02:44:39.078410  330644 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key
	I1119 02:44:39.078500  330644 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key
	I1119 02:44:39.078517  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt with IP's: []
	I1119 02:44:39.492473  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt ...
	I1119 02:44:39.492501  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt: {Name:mk2d2a0752005ddbf3ff7866b2d888f6c88921c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.492685  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key ...
	I1119 02:44:39.492708  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key: {Name:mk0676b22a9381558c3b1f8b4d9f9ded76cf6a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.492943  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:39.492986  330644 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:39.493002  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:39.493035  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:39.493063  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:39.493096  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:39.493152  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:39.493921  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:39.511675  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:39.528321  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:39.545416  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:39.561752  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:44:39.578259  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:44:39.594332  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:39.610201  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:44:39.626532  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:39.646920  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:39.663725  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:39.680824  330644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:39.692613  330644 ssh_runner.go:195] Run: openssl version
	I1119 02:44:39.699229  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:39.708084  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711716  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711771  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.746645  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:44:39.754713  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:39.762929  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766299  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766335  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.800570  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:39.808541  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:39.816270  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819952  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819989  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.854738  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
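The hash-and-symlink sequence above is how OpenSSL's trust store works: clients look up CA certificates under /etc/ssl/certs by subject-name hash plus a .0 suffix, so minikube computes the hash of each PEM and links <hash>.0 to it. Reproducing one of the names seen in the log (b5213941.0 for minikubeCA.pem):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"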
	I1119 02:44:39.863275  330644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:39.866811  330644 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:44:39.866866  330644 kubeadm.go:401] StartCluster: {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:39.866959  330644 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:39.867032  330644 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:39.893234  330644 cri.go:89] found id: ""
	I1119 02:44:39.893298  330644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:39.901084  330644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:44:39.908779  330644 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:44:39.908820  330644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:44:39.915918  330644 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:44:39.915956  330644 kubeadm.go:158] found existing configuration files:
	
	I1119 02:44:39.916000  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:44:39.924150  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:44:39.924192  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:44:39.931134  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:44:39.938135  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:44:39.938182  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:44:39.945082  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.952377  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:44:39.952425  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.959861  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:44:39.966757  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:44:39.966801  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
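The four grep/rm pairs above implement minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm init starts clean. Condensed into a loop (a sketch of the same logic):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done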
	I1119 02:44:39.973926  330644 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:44:40.012094  330644 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:44:40.012170  330644 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:44:40.051599  330644 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:44:40.051753  330644 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:44:40.051826  330644 kubeadm.go:319] OS: Linux
	I1119 02:44:40.051888  330644 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:44:40.051939  330644 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:44:40.052007  330644 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:44:40.052083  330644 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:44:40.052163  330644 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:44:40.052233  330644 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:44:40.052284  330644 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:44:40.052344  330644 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:44:40.110629  330644 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:44:40.110786  330644 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:44:40.110919  330644 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:44:40.118761  330644 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1119 02:44:37.420903  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:39.920505  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:40.823992  321785 pod_ready.go:94] pod "coredns-66bc5c9577-bht2q" is "Ready"
	I1119 02:44:40.824024  321785 pod_ready.go:86] duration metric: took 34.00468535s for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.826065  321785 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.829510  321785 pod_ready.go:94] pod "etcd-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.829533  321785 pod_ready.go:86] duration metric: took 3.445845ms for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.831135  321785 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.834490  321785 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.834508  321785 pod_ready.go:86] duration metric: took 3.353905ms for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.836222  321785 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.022776  321785 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:41.022802  321785 pod_ready.go:86] duration metric: took 186.560827ms for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.222650  321785 pod_ready.go:83] waiting for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.623243  321785 pod_ready.go:94] pod "kube-proxy-8gl4n" is "Ready"
	I1119 02:44:41.623276  321785 pod_ready.go:86] duration metric: took 400.60046ms for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.823313  321785 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222639  321785 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:42.222665  321785 pod_ready.go:86] duration metric: took 399.326737ms for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222675  321785 pod_ready.go:40] duration metric: took 35.410146964s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:42.265461  321785 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:42.267962  321785 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-167150" cluster and "default" namespace by default
	I1119 02:44:40.120572  330644 out.go:252]   - Generating certificates and keys ...
	I1119 02:44:40.120676  330644 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:44:40.120767  330644 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:44:40.285783  330644 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:44:40.596128  330644 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:44:40.775594  330644 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:44:40.856728  330644 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:44:41.447992  330644 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:44:41.448141  330644 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.120936  330644 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:44:42.121139  330644 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.400506  330644 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:44:42.544344  330644 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:44:42.820587  330644 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:44:42.820689  330644 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:44:42.995265  330644 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:44:43.162291  330644 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:44:43.196763  330644 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:44:43.556128  330644 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:44:43.787728  330644 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:44:43.788303  330644 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:44:43.792218  330644 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:44:43.793609  330644 out.go:252]   - Booting up control plane ...
	I1119 02:44:43.793714  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:44:43.793818  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:44:43.794447  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:44:43.811365  330644 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:44:43.811606  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:44:43.817701  330644 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:44:43.818010  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:44:43.818083  330644 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:44:43.912675  330644 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:44:43.912849  330644 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1119 02:44:42.419894  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:44.921381  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:46.419827  322722 pod_ready.go:94] pod "coredns-66bc5c9577-44bdr" is "Ready"
	I1119 02:44:46.419857  322722 pod_ready.go:86] duration metric: took 38.00494675s for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.422128  322722 pod_ready.go:83] waiting for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.425877  322722 pod_ready.go:94] pod "etcd-no-preload-837474" is "Ready"
	I1119 02:44:46.425901  322722 pod_ready.go:86] duration metric: took 3.744715ms for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.427596  322722 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.430915  322722 pod_ready.go:94] pod "kube-apiserver-no-preload-837474" is "Ready"
	I1119 02:44:46.430936  322722 pod_ready.go:86] duration metric: took 3.318971ms for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.432827  322722 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.619267  322722 pod_ready.go:94] pod "kube-controller-manager-no-preload-837474" is "Ready"
	I1119 02:44:46.619298  322722 pod_ready.go:86] duration metric: took 186.448054ms for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.819349  322722 pod_ready.go:83] waiting for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.219089  322722 pod_ready.go:94] pod "kube-proxy-hmxzk" is "Ready"
	I1119 02:44:47.219115  322722 pod_ready.go:86] duration metric: took 399.745795ms for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.418899  322722 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819293  322722 pod_ready.go:94] pod "kube-scheduler-no-preload-837474" is "Ready"
	I1119 02:44:47.819318  322722 pod_ready.go:86] duration metric: took 400.396392ms for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819332  322722 pod_ready.go:40] duration metric: took 39.409998426s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:47.882918  322722 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:47.884667  322722 out.go:179] * Done! kubectl is now configured to use "no-preload-837474" cluster and "default" namespace by default
	I1119 02:44:44.914267  330644 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001584412s
	I1119 02:44:44.919834  330644 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:44:44.919954  330644 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:44:44.920098  330644 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:44:44.920202  330644 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:44:46.082445  330644 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.162579737s
	I1119 02:44:46.762642  330644 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.842786839s
	I1119 02:44:48.421451  330644 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501654588s
	I1119 02:44:48.432989  330644 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:44:48.442965  330644 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:44:48.450246  330644 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:44:48.450564  330644 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-956139 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:44:48.457630  330644 kubeadm.go:319] [bootstrap-token] Using token: bpq1za.q7wy15mme3dprzfy
	I1119 02:44:48.458785  330644 out.go:252]   - Configuring RBAC rules ...
	I1119 02:44:48.458936  330644 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:44:48.461935  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:44:48.466914  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:44:48.469590  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:44:48.472718  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:44:48.475031  330644 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:44:48.827275  330644 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:44:49.241863  330644 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:44:49.827545  330644 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:44:49.828386  330644 kubeadm.go:319] 
	I1119 02:44:49.828472  330644 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:44:49.828485  330644 kubeadm.go:319] 
	I1119 02:44:49.828608  330644 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:44:49.828625  330644 kubeadm.go:319] 
	I1119 02:44:49.828650  330644 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:44:49.828731  330644 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:44:49.828818  330644 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:44:49.828832  330644 kubeadm.go:319] 
	I1119 02:44:49.828906  330644 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:44:49.828916  330644 kubeadm.go:319] 
	I1119 02:44:49.828980  330644 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:44:49.828990  330644 kubeadm.go:319] 
	I1119 02:44:49.829055  330644 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:44:49.829166  330644 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:44:49.829226  330644 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:44:49.829233  330644 kubeadm.go:319] 
	I1119 02:44:49.829341  330644 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:44:49.829450  330644 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:44:49.829464  330644 kubeadm.go:319] 
	I1119 02:44:49.829567  330644 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.829694  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:44:49.829727  330644 kubeadm.go:319] 	--control-plane 
	I1119 02:44:49.829737  330644 kubeadm.go:319] 
	I1119 02:44:49.829830  330644 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:44:49.829840  330644 kubeadm.go:319] 
	I1119 02:44:49.829940  330644 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.830063  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:44:49.832633  330644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:44:49.832729  330644 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
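The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate (certificatesDir on this cluster is /var/lib/minikube/certs, per the config at the top of this log; the pipeline below is the standard kubeadm recipe and assumes an RSA CA key):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'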
	I1119 02:44:49.832752  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:49.832761  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:49.834994  330644 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:44:49.836244  330644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:44:49.840560  330644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:44:49.840576  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:44:49.852577  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:44:50.080027  330644 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:44:50.080080  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:50.080111  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-956139 minikube.k8s.io/updated_at=2025_11_19T02_44_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=newest-cni-956139 minikube.k8s.io/primary=true
	I1119 02:44:50.181807  330644 ops.go:34] apiserver oom_adj: -16
	I1119 02:44:50.183726  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:50.684625  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:51.184631  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:51.684630  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:52.184401  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:52.684596  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:53.183868  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:53.683849  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:54.184175  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:54.684642  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:55.184680  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:55.255079  330644 kubeadm.go:1114] duration metric: took 5.175044255s to wait for elevateKubeSystemPrivileges
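The burst of identical `kubectl get sa default` runs above is a poll: minikube waits for the "default" ServiceAccount to exist before creating the minikube-rbac clusterrolebinding, since that account only appears once the controller-manager's serviceaccount controller is up. An equivalent shell sketch:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done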
	I1119 02:44:55.255111  330644 kubeadm.go:403] duration metric: took 15.388250216s to StartCluster
	I1119 02:44:55.255131  330644 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:55.255207  330644 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:55.257307  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:55.257611  330644 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:55.257651  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:44:55.257666  330644 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:44:55.257759  330644 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956139"
	I1119 02:44:55.257779  330644 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956139"
	I1119 02:44:55.257784  330644 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956139"
	I1119 02:44:55.257825  330644 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:44:55.257829  330644 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956139"
	I1119 02:44:55.257852  330644 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:55.258176  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.258487  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.259164  330644 out.go:179] * Verifying Kubernetes components...
	I1119 02:44:55.261074  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:55.287881  330644 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:44:55.288607  330644 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956139"
	I1119 02:44:55.288655  330644 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:44:55.288995  330644 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:55.289013  330644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:44:55.289063  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:55.289112  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.320315  330644 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:55.320506  330644 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:44:55.320689  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:55.327680  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:55.349806  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:55.379730  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:44:55.451250  330644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:55.457636  330644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:55.480367  330644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:55.612641  330644 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
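The sed pipeline a few lines up rewrites the coredns ConfigMap in flight; reconstructed from that command, the block it injects ahead of the `forward . /etc/resolv.conf` directive looks like:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}

This is what makes host.minikube.internal resolvable from inside the cluster.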
	I1119 02:44:55.614898  330644 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:44:55.614959  330644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:44:55.811014  330644 api_server.go:72] duration metric: took 553.367498ms to wait for apiserver process to appear ...
	I1119 02:44:55.811040  330644 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:44:55.811059  330644 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:44:55.819776  330644 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:44:55.820578  330644 api_server.go:141] control plane version: v1.34.1
	I1119 02:44:55.820609  330644 api_server.go:131] duration metric: took 9.561354ms to wait for apiserver health ...
	I1119 02:44:55.820618  330644 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:44:55.822755  330644 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
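The healthz probe logged just above can be reproduced by hand; -k is needed because the cluster CA lives in the minikube profile rather than the host trust store:

	curl -k https://192.168.76.2:8443/healthz   # expect: ok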
	
	
	==> CRI-O <==
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.469913056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.470101579Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fffbca222c4405b893aec800a0b00d976f3440b90097f95695f841cc4df0f2a1/merged/etc/passwd: no such file or directory"
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.470135294Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fffbca222c4405b893aec800a0b00d976f3440b90097f95695f841cc4df0f2a1/merged/etc/group: no such file or directory"
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.470419165Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.497349934Z" level=info msg="Created container c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132: kube-system/storage-provisioner/storage-provisioner" id=e92c2200-c59f-4003-bcf3-51d0a15ad8f2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.49792222Z" level=info msg="Starting container: c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132" id=a5e87abb-c75b-4308-a6e9-ee7a96f89271 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.499532389Z" level=info msg="Started container" PID=1704 containerID=c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132 description=kube-system/storage-provisioner/storage-provisioner id=a5e87abb-c75b-4308-a6e9-ee7a96f89271 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4da141227582a4cfa315aeb58b3d879fe26621fc1365da0d5be8196d50ec810f
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.307987267Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.311964245Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.311996111Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.312020786Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.317363587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.317391327Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.317411705Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.322704128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.322731444Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.322752513Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.326344526Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.32636437Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.326379151Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.329947608Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.329970091Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.329987686Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.333182725Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.33320028Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c8385e0cc96e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   4da141227582a       storage-provisioner                                    kube-system
	bcec9b49dd5da       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   fdf526a9ab55e       dashboard-metrics-scraper-6ffb444bf9-f7h7r             kubernetes-dashboard
	85d4c9fe32d3e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   87019f71636be       kubernetes-dashboard-855c9754f9-p96nm                  kubernetes-dashboard
	eeac877f14e22       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   bb5ef9ce42a26       coredns-66bc5c9577-bht2q                               kube-system
	5af6ef838078a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   33a76352dd3d3       busybox                                                default
	f7ef5557ab210       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   22978c72787eb       kindnet-rs6jh                                          kube-system
	1a72617903e63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   4da141227582a       storage-provisioner                                    kube-system
	84cc1d377e54e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   b067eea99e08f       kube-proxy-8gl4n                                       kube-system
	0850d32773d17       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   bd597e9ff78d5       etcd-default-k8s-diff-port-167150                      kube-system
	299bbab984622       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   b78fc90e86aec       kube-controller-manager-default-k8s-diff-port-167150   kube-system
	7cdb91f637031       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   5932253f606ba       kube-scheduler-default-k8s-diff-port-167150            kube-system
	f308d3728814c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   66d9550e6658b       kube-apiserver-default-k8s-diff-port-167150            kube-system
	
	
	==> coredns [eeac877f14e22971c9a442dc5730f94bf48becadd83cf5c234243e980bc2e2dd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53121 - 41446 "HINFO IN 5048112904534079613.722412660483301482. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.482980222s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-167150
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-167150
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=default-k8s-diff-port-167150
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:43:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-167150
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:35 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:35 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:35 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:44:35 +0000   Wed, 19 Nov 2025 02:43:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-167150
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                e0cfffa3-371a-463d-bbd7-aef4f2317c27
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-bht2q                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-167150                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-rs6jh                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-167150             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-167150    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-8gl4n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-167150             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-f7h7r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p96nm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x8 over 114s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node default-k8s-diff-port-167150 event: Registered Node default-k8s-diff-port-167150 in Controller
	  Normal  NodeReady                92s                  kubelet          Node default-k8s-diff-port-167150 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node default-k8s-diff-port-167150 event: Registered Node default-k8s-diff-port-167150 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [0850d32773d1729f97e0f3baf42d1b3638a7327abc66f584efafbdaa4334a283] <==
	{"level":"warn","ts":"2025-11-19T02:44:03.737405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.754025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.768589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.786000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.802958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.823351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.836051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.850311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.865759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.891326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.922036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.933872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.954737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.975812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.999649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.023650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.068701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.086916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.094769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.213255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:07.701890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.326342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" limit:1 ","response":"range_response_count:1 size:212"}
	{"level":"info","ts":"2025-11-19T02:44:07.702010Z","caller":"traceutil/trace.go:172","msg":"trace[624585779] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/attachdetach-controller; range_end:; response_count:1; response_revision:546; }","duration":"113.455209ms","start":"2025-11-19T02:44:07.588513Z","end":"2025-11-19T02:44:07.701969Z","steps":["trace[624585779] 'agreement among raft nodes before linearized reading'  (duration: 24.961272ms)","trace[624585779] 'range keys from in-memory index tree'  (duration: 88.255178ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:44:07.702344Z","caller":"traceutil/trace.go:172","msg":"trace[1148738554] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"118.504323ms","start":"2025-11-19T02:44:07.583813Z","end":"2025-11-19T02:44:07.702317Z","steps":["trace[1148738554] 'process raft request'  (duration: 29.706883ms)","trace[1148738554] 'compare'  (duration: 88.190835ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:44:33.663078Z","caller":"traceutil/trace.go:172","msg":"trace[1009595485] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"126.067301ms","start":"2025-11-19T02:44:33.536996Z","end":"2025-11-19T02:44:33.663064Z","steps":["trace[1009595485] 'process raft request'  (duration: 125.938237ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:44:34.625524Z","caller":"traceutil/trace.go:172","msg":"trace[88433709] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"163.630142ms","start":"2025-11-19T02:44:34.461876Z","end":"2025-11-19T02:44:34.625506Z","steps":["trace[88433709] 'process raft request'  (duration: 163.457435ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:44:56 up  1:27,  0 user,  load average: 3.11, 3.26, 2.28
	Linux default-k8s-diff-port-167150 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f7ef5557ab210d9505a17d76636e0b17ddac6b55834fdc5d6452172261f6d65e] <==
	I1119 02:44:06.013988       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:44:06.014482       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 02:44:06.014659       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:44:06.014670       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:44:06.014679       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:44:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:44:06.307592       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:44:06.308880       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:44:06.308906       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:44:06.309016       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 02:44:36.308758       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 02:44:36.308765       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 02:44:36.308778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 02:44:36.308769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 02:44:37.709132       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:44:37.709158       1 metrics.go:72] Registering metrics
	I1119 02:44:37.709218       1 controller.go:711] "Syncing nftables rules"
	I1119 02:44:46.307726       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:44:46.307784       1 main.go:301] handling current node
	I1119 02:44:56.308590       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:44:56.308641       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f308d3728814cf13897a458da3b827483ae71b6a4cf2cb0fd38e141e14586a3e] <==
	I1119 02:44:05.191085       1 aggregator.go:171] initial CRD sync complete...
	I1119 02:44:05.191290       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:44:05.191398       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:44:05.191574       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:44:05.191599       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:44:05.191552       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 02:44:05.191583       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:44:05.192293       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:44:05.193251       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:44:05.193883       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:44:05.194743       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 02:44:05.194774       1 policy_source.go:240] refreshing policies
	I1119 02:44:05.204829       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:44:05.230151       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:44:05.427954       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:44:05.855018       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:44:05.912019       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:44:05.940058       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:44:05.952255       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:44:06.021289       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.32.142"}
	I1119 02:44:06.038621       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.86.5"}
	I1119 02:44:06.074922       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:44:08.895714       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:44:08.998639       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:44:09.100623       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [299bbab984622e99c9bf240099fd1891299f48da807c2b0ab1553ad4885d7c13] <==
	I1119 02:44:08.550067       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:44:08.552460       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 02:44:08.552656       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:44:08.553089       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:44:08.553170       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:44:08.553246       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-167150"
	I1119 02:44:08.553383       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 02:44:08.554960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:44:08.555042       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 02:44:08.556602       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:44:08.556645       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:44:08.556672       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:44:08.556851       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:44:08.556897       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:44:08.557883       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:44:08.561756       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:44:08.565101       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:44:08.568318       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:44:08.572373       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:44:08.578731       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:44:08.589252       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:44:08.591590       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:44:08.591625       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:44:08.591636       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:44:09.107077       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [84cc1d377e54ec20e1b335bb3f2c1a89459c779c9092721db31a065e74db7d72] <==
	I1119 02:44:05.861356       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:44:05.995481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:44:06.098389       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:44:06.098744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1119 02:44:06.098930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:44:06.169060       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:44:06.169181       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:44:06.178578       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:44:06.179678       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:44:06.180040       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:06.182194       1 config.go:200] "Starting service config controller"
	I1119 02:44:06.182242       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:44:06.182406       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:44:06.182616       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:44:06.182705       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:44:06.183018       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:44:06.183355       1 config.go:309] "Starting node config controller"
	I1119 02:44:06.183371       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:44:06.282910       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:44:06.283046       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:44:06.283105       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:44:06.283687       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [7cdb91f63703193832fa8fc84ec766b4d87e2ac3e24887dcbcb074dfdac9634d] <==
	I1119 02:44:02.675999       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:44:05.142623       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:44:05.142657       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1119 02:44:05.142670       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:44:05.142679       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:44:05.211931       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:44:05.211965       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:05.215566       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:44:05.215604       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:44:05.216613       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:44:05.216944       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:44:05.316872       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:44:09 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:09.148179     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fb3343ba-3948-4d60-a357-b6b9a574f8c0-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-f7h7r\" (UID: \"fb3343ba-3948-4d60-a357-b6b9a574f8c0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r"
	Nov 19 02:44:09 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:09.148240     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffz6r\" (UniqueName: \"kubernetes.io/projected/fb3343ba-3948-4d60-a357-b6b9a574f8c0-kube-api-access-ffz6r\") pod \"dashboard-metrics-scraper-6ffb444bf9-f7h7r\" (UID: \"fb3343ba-3948-4d60-a357-b6b9a574f8c0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r"
	Nov 19 02:44:09 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:09.148269     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4sjm\" (UniqueName: \"kubernetes.io/projected/6f86096a-5658-426d-b3dc-6edeb5e215e9-kube-api-access-b4sjm\") pod \"kubernetes-dashboard-855c9754f9-p96nm\" (UID: \"6f86096a-5658-426d-b3dc-6edeb5e215e9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p96nm"
	Nov 19 02:44:09 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:09.148300     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6f86096a-5658-426d-b3dc-6edeb5e215e9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-p96nm\" (UID: \"6f86096a-5658-426d-b3dc-6edeb5e215e9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p96nm"
	Nov 19 02:44:10 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:10.797595     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 02:44:15 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:15.529601     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p96nm" podStartSLOduration=2.320065993 podStartE2EDuration="6.529573786s" podCreationTimestamp="2025-11-19 02:44:09 +0000 UTC" firstStartedPulling="2025-11-19 02:44:09.4193254 +0000 UTC m=+8.238486735" lastFinishedPulling="2025-11-19 02:44:13.628833195 +0000 UTC m=+12.447994528" observedRunningTime="2025-11-19 02:44:14.419235775 +0000 UTC m=+13.238397120" watchObservedRunningTime="2025-11-19 02:44:15.529573786 +0000 UTC m=+14.348735099"
	Nov 19 02:44:17 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:17.407310     724 scope.go:117] "RemoveContainer" containerID="612f5363273342aeb336f4a770199d354a5fc3b4b39e5d551b13fe66e77c2931"
	Nov 19 02:44:18 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:18.412175     724 scope.go:117] "RemoveContainer" containerID="612f5363273342aeb336f4a770199d354a5fc3b4b39e5d551b13fe66e77c2931"
	Nov 19 02:44:18 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:18.412320     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:18 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:18.412543     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:19 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:19.416165     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:19 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:19.416354     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:22 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:22.481197     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:22 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:22.481364     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:33 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:33.307347     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:34 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:34.453904     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:34 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:34.454145     724 scope.go:117] "RemoveContainer" containerID="bcec9b49dd5da59561dcca13fbccd1ddc4abfb2a5907b4002476e334cc41c669"
	Nov 19 02:44:34 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:34.454329     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:36 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:36.462761     724 scope.go:117] "RemoveContainer" containerID="1a72617903e6314079ce5b0564a60457c183f68e6e318bd4b089000d5050df80"
	Nov 19 02:44:42 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:42.480982     724 scope.go:117] "RemoveContainer" containerID="bcec9b49dd5da59561dcca13fbccd1ddc4abfb2a5907b4002476e334cc41c669"
	Nov 19 02:44:42 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:42.481153     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:54 default-k8s-diff-port-167150 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:44:54 default-k8s-diff-port-167150 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:44:54 default-k8s-diff-port-167150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:44:54 default-k8s-diff-port-167150 systemd[1]: kubelet.service: Consumed 1.655s CPU time.
	
	
	==> kubernetes-dashboard [85d4c9fe32d3e2d0a6c2edf0b44354e46f03306212318e808dee8bf625ac5497] <==
	2025/11/19 02:44:13 Starting overwatch
	2025/11/19 02:44:13 Using namespace: kubernetes-dashboard
	2025/11/19 02:44:13 Using in-cluster config to connect to apiserver
	2025/11/19 02:44:13 Using secret token for csrf signing
	2025/11/19 02:44:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:44:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:44:13 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 02:44:13 Generating JWE encryption key
	2025/11/19 02:44:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:44:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:44:14 Initializing JWE encryption key from synchronized object
	2025/11/19 02:44:14 Creating in-cluster Sidecar client
	2025/11/19 02:44:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:14 Serving insecurely on HTTP port: 9090
	2025/11/19 02:44:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1a72617903e6314079ce5b0564a60457c183f68e6e318bd4b089000d5050df80] <==
	I1119 02:44:05.804826       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:44:35.807919       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132] <==
	I1119 02:44:36.511391       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:44:36.518461       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:44:36.518500       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:44:36.520125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:39.975209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:44.235534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:47.834224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:50.887992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:53.910122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:53.917518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:53.917752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:44:53.918110       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167150_c6291bef-03db-4dcb-845b-565c6bf2bb39!
	I1119 02:44:53.917878       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3714f73f-a3cc-42cd-ae7e-a03ea89c8e13", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-167150_c6291bef-03db-4dcb-845b-565c6bf2bb39 became leader
	W1119 02:44:53.922086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:53.926462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:54.019737       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167150_c6291bef-03db-4dcb-845b-565c6bf2bb39!
	W1119 02:44:55.929197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:55.933149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150: exit status 2 (338.613519ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-167150 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-167150
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-167150:

-- stdout --
	[
	    {
	        "Id": "eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62",
	        "Created": "2025-11-19T02:42:49.168084052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 322071,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:43:54.569776656Z",
	            "FinishedAt": "2025-11-19T02:43:53.705001054Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/hostname",
	        "HostsPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/hosts",
	        "LogPath": "/var/lib/docker/containers/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62/eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62-json.log",
	        "Name": "/default-k8s-diff-port-167150",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-167150:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-167150",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eba2f66817ceb9445f5247c309c2690fcb320f53eb011c42a1ea1cd06c438d62",
	                "LowerDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05015802710c07cf873b6416e0594c96689d6d543f9392019be507a57324d9f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-167150",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-167150/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-167150",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-167150",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-167150",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a7c75d8f7228900b6b4577f09d59e758ad61b773bdaad84cf94e376da66c89e3",
	            "SandboxKey": "/var/run/docker/netns/a7c75d8f7228",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-167150": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "446e8ca4ab47bdd8748a928cc2372566c0406b83b85d73866a63b7236a1153af",
	                    "EndpointID": "6d2b808ccb785c5597aec3aa1a3d24aed925055b8551b38bb4b8e035f6985c5f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b6:ad:db:08:68:f4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-167150",
	                        "eba2f66817ce"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150: exit status 2 (331.487923ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-167150 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-167150 logs -n 25: (1.072742649s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-167150 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ default-k8s-diff-port-167150 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p default-k8s-diff-port-167150 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:44:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:44:29.891671  330644 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:29.891773  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.891782  330644 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:29.891786  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.892013  330644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:29.892489  330644 out.go:368] Setting JSON to false
	I1119 02:44:29.893932  330644 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5217,"bootTime":1763515053,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:44:29.894009  330644 start.go:143] virtualization: kvm guest
	I1119 02:44:29.896106  330644 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:44:29.897349  330644 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:44:29.897381  330644 notify.go:221] Checking for updates...
	I1119 02:44:29.899649  330644 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:44:29.900639  330644 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:29.901703  330644 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:44:29.902810  330644 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:44:29.903920  330644 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:44:29.905455  330644 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905620  330644 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905735  330644 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905864  330644 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:44:29.930269  330644 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:44:29.930390  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:29.990852  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:29.980528215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:29.990974  330644 docker.go:319] overlay module found
	I1119 02:44:29.992680  330644 out.go:179] * Using the docker driver based on user configuration
	I1119 02:44:29.993882  330644 start.go:309] selected driver: docker
	I1119 02:44:29.993897  330644 start.go:930] validating driver "docker" against <nil>
	I1119 02:44:29.993908  330644 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:44:29.994485  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:30.055174  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:30.045301349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:30.055367  330644 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 02:44:30.055398  330644 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 02:44:30.055690  330644 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:44:30.057878  330644 out.go:179] * Using Docker driver with root privileges
	I1119 02:44:30.059068  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:30.059130  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:30.059141  330644 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:44:30.059196  330644 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:30.060543  330644 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:44:30.061681  330644 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:44:30.062975  330644 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:44:30.064114  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.064143  330644 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:44:30.064167  330644 cache.go:65] Caching tarball of preloaded images
	I1119 02:44:30.064199  330644 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:44:30.064251  330644 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:44:30.064266  330644 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:44:30.064364  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:30.064387  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json: {Name:mk5f6a602a7486c803f28ee981bc4fb72f30089f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:30.086997  330644 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:44:30.087020  330644 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:44:30.087033  330644 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:44:30.087059  330644 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:44:30.087146  330644 start.go:364] duration metric: took 69.531µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:44:30.087169  330644 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:30.087250  330644 start.go:125] createHost starting for "" (driver="docker")
	W1119 02:44:25.920223  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:28.420250  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:30.420774  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:29.634283  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:31.634456  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:34.134853  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:29.824614  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:31.825210  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:33.861933  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:30.090250  330644 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:44:30.090540  330644 start.go:159] libmachine.API.Create for "newest-cni-956139" (driver="docker")
	I1119 02:44:30.090580  330644 client.go:173] LocalClient.Create starting
	I1119 02:44:30.090711  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 02:44:30.090762  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090788  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.090868  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 02:44:30.090897  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090911  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.091311  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:44:30.108825  330644 cli_runner.go:211] docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:44:30.108874  330644 network_create.go:284] running [docker network inspect newest-cni-956139] to gather additional debugging logs...
	I1119 02:44:30.108888  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139
	W1119 02:44:30.125848  330644 cli_runner.go:211] docker network inspect newest-cni-956139 returned with exit code 1
	I1119 02:44:30.125873  330644 network_create.go:287] error running [docker network inspect newest-cni-956139]: docker network inspect newest-cni-956139: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-956139 not found
	I1119 02:44:30.125887  330644 network_create.go:289] output of [docker network inspect newest-cni-956139]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-956139 not found
	
	** /stderr **
	I1119 02:44:30.126008  330644 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:30.145372  330644 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
	I1119 02:44:30.146006  330644 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-70e7d73f86d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:64:3f:46:8e:7a} reservation:<nil>}
	I1119 02:44:30.146778  330644 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ef477b5a23 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:eb:22:b3:62:92} reservation:<nil>}
	I1119 02:44:30.147612  330644 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee2320}
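The three "skipping subnet" lines and the final pick above trace a first-free-subnet scan over private /24 ranges. A minimal Go sketch of the visible behavior follows; the step of 9 between candidates and the isTaken stub are inferred from this log, not taken from minikube's source:

	// subnet_scan.go: pick the first free 192.168.x.0/24, mirroring the
	// scan logged above (49, 58, 67 taken; 76 chosen).
	package main

	import "fmt"

	// isTaken is a stand-in: the real check inspects host bridge interfaces.
	func isTaken(cidr string) bool {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		return taken[cidr]
	}

	func freeSubnet() string {
		for third := 49; third < 255; third += 9 { // step inferred from the log
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !isTaken(cidr) {
				return cidr
			}
		}
		return ""
	}

	func main() { fmt.Println(freeSubnet()) } // prints 192.168.76.0/24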
	I1119 02:44:30.147633  330644 network_create.go:124] attempt to create docker network newest-cni-956139 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 02:44:30.147689  330644 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-956139 newest-cni-956139
	I1119 02:44:30.194747  330644 network_create.go:108] docker network newest-cni-956139 192.168.76.0/24 created
	I1119 02:44:30.194772  330644 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-956139" container
	I1119 02:44:30.194838  330644 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:44:30.212175  330644 cli_runner.go:164] Run: docker volume create newest-cni-956139 --label name.minikube.sigs.k8s.io=newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:44:30.229588  330644 oci.go:103] Successfully created a docker volume newest-cni-956139
	I1119 02:44:30.229664  330644 cli_runner.go:164] Run: docker run --rm --name newest-cni-956139-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --entrypoint /usr/bin/test -v newest-cni-956139:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:44:30.612069  330644 oci.go:107] Successfully prepared a docker volume newest-cni-956139
	I1119 02:44:30.612124  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.612132  330644 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:44:30.612187  330644 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1119 02:44:32.919409  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:34.920166  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:34.646141  320707 pod_ready.go:94] pod "coredns-66bc5c9577-6zqr2" is "Ready"
	I1119 02:44:34.646170  320707 pod_ready.go:86] duration metric: took 35.016957338s for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.648819  320707 pod_ready.go:83] waiting for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.831828  320707 pod_ready.go:94] pod "etcd-embed-certs-811173" is "Ready"
	I1119 02:44:34.831852  320707 pod_ready.go:86] duration metric: took 183.006168ms for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.834239  320707 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.837643  320707 pod_ready.go:94] pod "kube-apiserver-embed-certs-811173" is "Ready"
	I1119 02:44:34.837663  320707 pod_ready.go:86] duration metric: took 3.400351ms for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.839329  320707 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.842652  320707 pod_ready.go:94] pod "kube-controller-manager-embed-certs-811173" is "Ready"
	I1119 02:44:34.842670  320707 pod_ready.go:86] duration metric: took 3.319388ms for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.032627  320707 pod_ready.go:83] waiting for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.432934  320707 pod_ready.go:94] pod "kube-proxy-s5bzz" is "Ready"
	I1119 02:44:35.432959  320707 pod_ready.go:86] duration metric: took 400.306652ms for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.633961  320707 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032469  320707 pod_ready.go:94] pod "kube-scheduler-embed-certs-811173" is "Ready"
	I1119 02:44:36.032499  320707 pod_ready.go:86] duration metric: took 398.480495ms for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032511  320707 pod_ready.go:40] duration metric: took 36.406499301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:36.080404  320707 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:36.082160  320707 out.go:179] * Done! kubectl is now configured to use "embed-certs-811173" cluster and "default" namespace by default
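The pod_ready lines above come from a poll that succeeds once each kube-system pod is Ready or has been deleted. A minimal client-go sketch of that "Ready or be gone" check (the kubeconfig loading, pod name, and poll interval are illustrative, not minikube's actual code):

	// pod_wait.go: poll one pod until it is Ready or gone (sketch).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReadyOrGone(cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // "be gone" also satisfies the wait
		}
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			ok, err := podReadyOrGone(cs, "coredns-66bc5c9577-6zqr2")
			if ok || err != nil {
				fmt.Println(ok, err)
				return
			}
			time.Sleep(2 * time.Second)
		}
	}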
	I1119 02:44:34.960079  330644 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.347852696s)
	I1119 02:44:34.960108  330644 kic.go:203] duration metric: took 4.347972861s to extract preloaded images to volume ...
	W1119 02:44:34.960206  330644 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:44:34.960254  330644 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:44:34.960300  330644 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:44:35.014083  330644 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-956139 --name newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-956139 --network newest-cni-956139 --ip 192.168.76.2 --volume newest-cni-956139:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:44:35.325493  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Running}}
	I1119 02:44:35.343669  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.361759  330644 cli_runner.go:164] Run: docker exec newest-cni-956139 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:44:35.406925  330644 oci.go:144] the created container "newest-cni-956139" has a running status.
	I1119 02:44:35.406959  330644 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa...
	I1119 02:44:35.779267  330644 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:44:35.805615  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.826512  330644 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:44:35.826530  330644 kic_runner.go:114] Args: [docker exec --privileged newest-cni-956139 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:44:35.871319  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.889991  330644 machine.go:94] provisionDockerMachine start ...
	I1119 02:44:35.890097  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:35.909789  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:35.910136  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:35.910158  330644 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:44:36.043778  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.043805  330644 ubuntu.go:182] provisioning hostname "newest-cni-956139"
	I1119 02:44:36.043885  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.065697  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.065904  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.065918  330644 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956139 && echo "newest-cni-956139" | sudo tee /etc/hostname
	I1119 02:44:36.211004  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.211088  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.229392  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.229616  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.229635  330644 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956139/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:44:36.359138  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:44:36.359177  330644 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:44:36.359210  330644 ubuntu.go:190] setting up certificates
	I1119 02:44:36.359219  330644 provision.go:84] configureAuth start
	I1119 02:44:36.359262  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:36.381048  330644 provision.go:143] copyHostCerts
	I1119 02:44:36.381118  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:44:36.381134  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:44:36.381241  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:44:36.381393  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:44:36.381407  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:44:36.381473  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:44:36.381598  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:44:36.381613  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:44:36.381659  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:44:36.381762  330644 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956139 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956139]
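The provision step above issues a server certificate whose SANs cover the loopback address, the static node IP, and the host names. A minimal crypto/x509 sketch using the SAN list from that log line; it self-signs for brevity, whereas minikube signs with the CA key pair named in the line:

	// san_cert.go: issue a server cert with the logged SANs (sketch).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-956139"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			DNSNames:     []string{"localhost", "minikube", "newest-cni-956139"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}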
	I1119 02:44:36.425094  330644 provision.go:177] copyRemoteCerts
	I1119 02:44:36.425145  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:44:36.425178  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.444152  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.542494  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:44:36.560963  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:44:36.577617  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:44:36.594302  330644 provision.go:87] duration metric: took 235.073311ms to configureAuth
	I1119 02:44:36.594322  330644 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:44:36.594527  330644 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:36.594625  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.612019  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.612218  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.612232  330644 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:44:36.879790  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:44:36.879819  330644 machine.go:97] duration metric: took 989.804229ms to provisionDockerMachine
	I1119 02:44:36.879830  330644 client.go:176] duration metric: took 6.789240603s to LocalClient.Create
	I1119 02:44:36.879851  330644 start.go:167] duration metric: took 6.789312626s to libmachine.API.Create "newest-cni-956139"
	I1119 02:44:36.879860  330644 start.go:293] postStartSetup for "newest-cni-956139" (driver="docker")
	I1119 02:44:36.879872  330644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:44:36.879933  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:44:36.879968  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.898156  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.993744  330644 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:44:36.997203  330644 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:44:36.997235  330644 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:44:36.997254  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:44:36.997312  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:44:36.997404  330644 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:44:36.997536  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:44:37.005305  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:37.024142  330644 start.go:296] duration metric: took 144.272497ms for postStartSetup
	I1119 02:44:37.024490  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.042142  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:37.042364  330644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:44:37.042421  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.060279  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.151155  330644 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:44:37.155487  330644 start.go:128] duration metric: took 7.068223226s to createHost
	I1119 02:44:37.155509  330644 start.go:83] releasing machines lock for "newest-cni-956139", held for 7.068353821s
	I1119 02:44:37.155567  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.172738  330644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:44:37.172750  330644 ssh_runner.go:195] Run: cat /version.json
	I1119 02:44:37.172802  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.172817  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.191403  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.191761  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.349781  330644 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:37.356447  330644 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:44:37.390971  330644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:44:37.395386  330644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:44:37.395452  330644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:44:37.420966  330644 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:44:37.421000  330644 start.go:496] detecting cgroup driver to use...
	I1119 02:44:37.421031  330644 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:44:37.421116  330644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:44:37.437016  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:44:37.448636  330644 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:44:37.448680  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:44:37.464103  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:44:37.483229  330644 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:37.569719  330644 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:37.663891  330644 docker.go:234] disabling docker service ...
	I1119 02:44:37.663946  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:37.684672  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:37.699707  330644 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:37.783938  330644 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:37.866466  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:44:37.878906  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:37.893148  330644 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:37.893200  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.903765  330644 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:37.903825  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.912380  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.922240  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.930944  330644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:37.938625  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.947066  330644 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.960171  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
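Applied in order, the sed edits above leave the 02-crio.conf drop-in with roughly the following settings. This fragment is reconstructed from the logged commands (the file itself is not echoed), and the section placement follows CRI-O's documented schema:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]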
	I1119 02:44:37.968261  330644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:37.975267  330644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:37.982398  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.060067  330644 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:44:38.192960  330644 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:38.193022  330644 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:38.196763  330644 start.go:564] Will wait 60s for crictl version
	I1119 02:44:38.196824  330644 ssh_runner.go:195] Run: which crictl
	I1119 02:44:38.200161  330644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:38.225001  330644 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
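The two "Will wait 60s" lines describe simple bounded polls: one for the CRI socket to appear after the crio restart, one for crictl to answer. A minimal Go sketch of the socket wait (the poll interval is illustrative):

	// socket_wait.go: poll until a CRI socket path exists or the
	// deadline passes, as in "Will wait 60s for socket path" above.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}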
	I1119 02:44:38.225065  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.251944  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.282138  330644 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:44:38.283487  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:38.300312  330644 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:38.304280  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:38.315573  330644 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1119 02:44:36.325065  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:38.824893  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:38.316650  330644 kubeadm.go:884] updating cluster {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:38.316772  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:38.316823  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.347925  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.347943  330644 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:44:38.348024  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.371370  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.371386  330644 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:38.371393  330644 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:44:38.371489  330644 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
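The empty ExecStart= in the drop-in above is deliberate: for ordinary (non-oneshot) services systemd allows only one ExecStart, so a drop-in must first clear the inherited value before supplying the minikube-specific command line. To inspect the merged result on a node, something like the following works (hypothetical follow-up commands, not part of this run):

	sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload   # re-read unit files after editing a drop-in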
	I1119 02:44:38.371568  330644 ssh_runner.go:195] Run: crio config
	I1119 02:44:38.414403  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:38.414425  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:38.414455  330644 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:44:38.414480  330644 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956139 NodeName:newest-cni-956139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:38.414596  330644 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
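The kubeadm config printed above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config of this shape can be sanity-checked offline before init; a minimal sketch, assuming a kubeadm v1.34 binary on PATH:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml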
	
	I1119 02:44:38.414650  330644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:38.422980  330644 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:38.423037  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:38.430764  330644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 02:44:38.442899  330644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:38.457503  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1119 02:44:38.470194  330644 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:38.473583  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:38.482869  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.562300  330644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:38.585622  330644 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139 for IP: 192.168.76.2
	I1119 02:44:38.585639  330644 certs.go:195] generating shared ca certs ...
	I1119 02:44:38.585658  330644 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.585812  330644 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:38.585880  330644 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:38.585900  330644 certs.go:257] generating profile certs ...
	I1119 02:44:38.585973  330644 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key
	I1119 02:44:38.585994  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt with IP's: []
	I1119 02:44:38.886736  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt ...
	I1119 02:44:38.886761  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt: {Name:mkb981b48727217d5d544f8c1ece639a24196b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.886914  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key ...
	I1119 02:44:38.886927  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key: {Name:mkf09d335927b94ecd83db709f24055ce131f9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.887002  330644 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d
	I1119 02:44:38.887016  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 02:44:39.078031  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d ...
	I1119 02:44:39.078059  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d: {Name:mkcff50d0bd0e5de553650f0790abc33df1f3d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078203  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d ...
	I1119 02:44:39.078217  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d: {Name:mk332d91d4c4926805e4ae3abcbd91571604bef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078314  330644 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt
	I1119 02:44:39.078410  330644 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key
	I1119 02:44:39.078500  330644 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key
	I1119 02:44:39.078517  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt with IP's: []
	I1119 02:44:39.492473  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt ...
	I1119 02:44:39.492501  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt: {Name:mk2d2a0752005ddbf3ff7866b2d888f6c88921c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.492685  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key ...
	I1119 02:44:39.492708  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key: {Name:mk0676b22a9381558c3b1f8b4d9f9ded76cf6a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.492943  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:39.492986  330644 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:39.493002  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:39.493035  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:39.493063  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:39.493096  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:39.493152  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:39.493921  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:39.511675  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:39.528321  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:39.545416  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:39.561752  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:44:39.578259  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:44:39.594332  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:39.610201  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:44:39.626532  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:39.646920  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:39.663725  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:39.680824  330644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:39.692613  330644 ssh_runner.go:195] Run: openssl version
	I1119 02:44:39.699229  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:39.708084  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711716  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711771  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.746645  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:44:39.754713  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:39.762929  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766299  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766335  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.800570  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:39.808541  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:39.816270  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819952  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819989  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.854738  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
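The openssl x509 -hash calls above explain the symlink names: OpenSSL-based clients resolve CAs in /etc/ssl/certs by subject hash, so each PEM gets a <hash>.0 link. The minikubeCA step, reproduced by hand with the hash value visible in the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0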
	I1119 02:44:39.863275  330644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:39.866811  330644 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:44:39.866866  330644 kubeadm.go:401] StartCluster: {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:39.866959  330644 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:39.867032  330644 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:39.893234  330644 cri.go:89] found id: ""
	I1119 02:44:39.893298  330644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:39.901084  330644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:44:39.908779  330644 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:44:39.908820  330644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:44:39.915918  330644 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:44:39.915956  330644 kubeadm.go:158] found existing configuration files:
	
	I1119 02:44:39.916000  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:44:39.924150  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:44:39.924192  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:44:39.931134  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:44:39.938135  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:44:39.938182  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:44:39.945082  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.952377  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:44:39.952425  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.959861  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:44:39.966757  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:44:39.966801  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:44:39.973926  330644 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:44:40.012094  330644 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:44:40.012170  330644 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:44:40.051599  330644 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:44:40.051753  330644 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:44:40.051826  330644 kubeadm.go:319] OS: Linux
	I1119 02:44:40.051888  330644 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:44:40.051939  330644 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:44:40.052007  330644 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:44:40.052083  330644 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:44:40.052163  330644 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:44:40.052233  330644 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:44:40.052284  330644 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:44:40.052344  330644 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:44:40.110629  330644 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:44:40.110786  330644 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:44:40.110919  330644 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:44:40.118761  330644 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1119 02:44:37.420903  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:39.920505  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:40.823992  321785 pod_ready.go:94] pod "coredns-66bc5c9577-bht2q" is "Ready"
	I1119 02:44:40.824024  321785 pod_ready.go:86] duration metric: took 34.00468535s for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.826065  321785 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.829510  321785 pod_ready.go:94] pod "etcd-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.829533  321785 pod_ready.go:86] duration metric: took 3.445845ms for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.831135  321785 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.834490  321785 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.834508  321785 pod_ready.go:86] duration metric: took 3.353905ms for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.836222  321785 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.022776  321785 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:41.022802  321785 pod_ready.go:86] duration metric: took 186.560827ms for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.222650  321785 pod_ready.go:83] waiting for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.623243  321785 pod_ready.go:94] pod "kube-proxy-8gl4n" is "Ready"
	I1119 02:44:41.623276  321785 pod_ready.go:86] duration metric: took 400.60046ms for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.823313  321785 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222639  321785 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:42.222665  321785 pod_ready.go:86] duration metric: took 399.326737ms for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222675  321785 pod_ready.go:40] duration metric: took 35.410146964s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:42.265461  321785 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:42.267962  321785 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-167150" cluster and "default" namespace by default
	I1119 02:44:40.120572  330644 out.go:252]   - Generating certificates and keys ...
	I1119 02:44:40.120676  330644 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:44:40.120767  330644 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:44:40.285783  330644 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:44:40.596128  330644 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:44:40.775594  330644 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:44:40.856728  330644 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:44:41.447992  330644 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:44:41.448141  330644 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.120936  330644 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:44:42.121139  330644 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.400506  330644 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:44:42.544344  330644 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:44:42.820587  330644 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:44:42.820689  330644 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:44:42.995265  330644 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:44:43.162291  330644 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:44:43.196763  330644 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:44:43.556128  330644 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:44:43.787728  330644 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:44:43.788303  330644 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:44:43.792218  330644 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:44:43.793609  330644 out.go:252]   - Booting up control plane ...
	I1119 02:44:43.793714  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:44:43.793818  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:44:43.794447  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:44:43.811365  330644 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:44:43.811606  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:44:43.817701  330644 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:44:43.818010  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:44:43.818083  330644 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:44:43.912675  330644 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:44:43.912849  330644 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1119 02:44:42.419894  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:44.921381  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:46.419827  322722 pod_ready.go:94] pod "coredns-66bc5c9577-44bdr" is "Ready"
	I1119 02:44:46.419857  322722 pod_ready.go:86] duration metric: took 38.00494675s for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.422128  322722 pod_ready.go:83] waiting for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.425877  322722 pod_ready.go:94] pod "etcd-no-preload-837474" is "Ready"
	I1119 02:44:46.425901  322722 pod_ready.go:86] duration metric: took 3.744715ms for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.427596  322722 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.430915  322722 pod_ready.go:94] pod "kube-apiserver-no-preload-837474" is "Ready"
	I1119 02:44:46.430936  322722 pod_ready.go:86] duration metric: took 3.318971ms for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.432827  322722 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.619267  322722 pod_ready.go:94] pod "kube-controller-manager-no-preload-837474" is "Ready"
	I1119 02:44:46.619298  322722 pod_ready.go:86] duration metric: took 186.448054ms for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.819349  322722 pod_ready.go:83] waiting for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.219089  322722 pod_ready.go:94] pod "kube-proxy-hmxzk" is "Ready"
	I1119 02:44:47.219115  322722 pod_ready.go:86] duration metric: took 399.745795ms for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.418899  322722 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819293  322722 pod_ready.go:94] pod "kube-scheduler-no-preload-837474" is "Ready"
	I1119 02:44:47.819318  322722 pod_ready.go:86] duration metric: took 400.396392ms for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819332  322722 pod_ready.go:40] duration metric: took 39.409998426s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:47.882918  322722 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:47.884667  322722 out.go:179] * Done! kubectl is now configured to use "no-preload-837474" cluster and "default" namespace by default
	I1119 02:44:44.914267  330644 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001584412s
	I1119 02:44:44.919834  330644 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:44:44.919954  330644 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:44:44.920098  330644 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:44:44.920202  330644 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:44:46.082445  330644 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.162579737s
	I1119 02:44:46.762642  330644 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.842786839s
	I1119 02:44:48.421451  330644 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501654588s
	I1119 02:44:48.432989  330644 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:44:48.442965  330644 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:44:48.450246  330644 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:44:48.450564  330644 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-956139 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:44:48.457630  330644 kubeadm.go:319] [bootstrap-token] Using token: bpq1za.q7wy15mme3dprzfy
	I1119 02:44:48.458785  330644 out.go:252]   - Configuring RBAC rules ...
	I1119 02:44:48.458936  330644 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:44:48.461935  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:44:48.466914  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:44:48.469590  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:44:48.472718  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:44:48.475031  330644 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:44:48.827275  330644 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:44:49.241863  330644 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:44:49.827545  330644 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:44:49.828386  330644 kubeadm.go:319] 
	I1119 02:44:49.828472  330644 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:44:49.828485  330644 kubeadm.go:319] 
	I1119 02:44:49.828608  330644 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:44:49.828625  330644 kubeadm.go:319] 
	I1119 02:44:49.828650  330644 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:44:49.828731  330644 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:44:49.828818  330644 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:44:49.828832  330644 kubeadm.go:319] 
	I1119 02:44:49.828906  330644 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:44:49.828916  330644 kubeadm.go:319] 
	I1119 02:44:49.828980  330644 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:44:49.828990  330644 kubeadm.go:319] 
	I1119 02:44:49.829055  330644 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:44:49.829166  330644 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:44:49.829226  330644 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:44:49.829233  330644 kubeadm.go:319] 
	I1119 02:44:49.829341  330644 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:44:49.829450  330644 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:44:49.829464  330644 kubeadm.go:319] 
	I1119 02:44:49.829567  330644 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.829694  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:44:49.829727  330644 kubeadm.go:319] 	--control-plane 
	I1119 02:44:49.829737  330644 kubeadm.go:319] 
	I1119 02:44:49.829830  330644 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:44:49.829840  330644 kubeadm.go:319] 
	I1119 02:44:49.829940  330644 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.830063  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:44:49.832633  330644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:44:49.832729  330644 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:44:49.832752  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:49.832761  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:49.834994  330644 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:44:49.836244  330644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:44:49.840560  330644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:44:49.840576  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:44:49.852577  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:44:50.080027  330644 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:44:50.080080  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:50.080111  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-956139 minikube.k8s.io/updated_at=2025_11_19T02_44_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=newest-cni-956139 minikube.k8s.io/primary=true
	I1119 02:44:50.181807  330644 ops.go:34] apiserver oom_adj: -16
	I1119 02:44:50.183726  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:50.684625  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:51.184631  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:51.684630  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:52.184401  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:52.684596  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:53.183868  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:53.683849  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:54.184175  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:54.684642  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:55.184680  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:55.255079  330644 kubeadm.go:1114] duration metric: took 5.175044255s to wait for elevateKubeSystemPrivileges
	I1119 02:44:55.255111  330644 kubeadm.go:403] duration metric: took 15.388250216s to StartCluster
	I1119 02:44:55.255131  330644 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:55.255207  330644 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:55.257307  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:55.257611  330644 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:55.257651  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:44:55.257666  330644 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:44:55.257759  330644 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956139"
	I1119 02:44:55.257779  330644 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956139"
	I1119 02:44:55.257784  330644 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956139"
	I1119 02:44:55.257825  330644 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:44:55.257829  330644 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956139"
	I1119 02:44:55.257852  330644 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:55.258176  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.258487  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.259164  330644 out.go:179] * Verifying Kubernetes components...
	I1119 02:44:55.261074  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:55.287881  330644 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:44:55.288607  330644 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956139"
	I1119 02:44:55.288655  330644 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:44:55.288995  330644 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:55.289013  330644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:44:55.289063  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:55.289112  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.320315  330644 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:55.320506  330644 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:44:55.320689  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:55.327680  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:55.349806  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:55.379730  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
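The sed pipeline above rewrites the CoreDNS Corefile held in the coredns ConfigMap. Read straight from the command's replacement text, the inserted fragment is a hosts plugin block ahead of the forward directive (plus a log line before errors):

	# added inside the Corefile server block, before "forward . /etc/resolv.conf"
	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}

That is the edit the later "host record injected into CoreDNS's ConfigMap" message reports as done.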
	I1119 02:44:55.451250  330644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:55.457636  330644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:55.480367  330644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:55.612641  330644 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 02:44:55.614898  330644 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:44:55.614959  330644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:44:55.811014  330644 api_server.go:72] duration metric: took 553.367498ms to wait for apiserver process to appear ...
	I1119 02:44:55.811040  330644 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:44:55.811059  330644 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:44:55.819776  330644 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
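The same probe can be reproduced from the host; a sketch, assuming the caller does not trust the cluster CA (hence -k):

	curl -k https://192.168.76.2:8443/healthz   # body on success: ok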
	I1119 02:44:55.820578  330644 api_server.go:141] control plane version: v1.34.1
	I1119 02:44:55.820609  330644 api_server.go:131] duration metric: took 9.561354ms to wait for apiserver health ...
	I1119 02:44:55.820618  330644 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:44:55.822755  330644 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:44:55.823417  330644 system_pods.go:59] 8 kube-system pods found
	I1119 02:44:55.823483  330644 system_pods.go:61] "coredns-66bc5c9577-l7vmx" [0d704d05-424c-4c54-bdf6-a5ec01cbcbf8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:44:55.823503  330644 system_pods.go:61] "etcd-newest-cni-956139" [724e0280-bcab-4c1e-aae3-5a7a72519d23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:55.823514  330644 system_pods.go:61] "kindnet-s65nc" [20583cba-5129-470f-b6f9-869642b28f93] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:44:55.823537  330644 system_pods.go:61] "kube-apiserver-newest-cni-956139" [a81fa4fa-fea5-4996-9230-94e06fb3b276] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:55.823553  330644 system_pods.go:61] "kube-controller-manager-newest-cni-956139" [a93f6b9a-946c-4099-bbc0-139db17304e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:55.823562  330644 system_pods.go:61] "kube-proxy-7frpm" [7f447bc0-73e5-4008-b474-551b69553ce3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:44:55.823583  330644 system_pods.go:61] "kube-scheduler-newest-cni-956139" [ebd7110b-7108-4bca-b86d-c7126087da9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:55.823592  330644 system_pods.go:61] "storage-provisioner" [b8a81262-3433-4dd4-a802-58a9b4440545] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:44:55.823603  330644 system_pods.go:74] duration metric: took 2.978578ms to wait for pod list to return data ...
	I1119 02:44:55.823616  330644 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:44:55.824202  330644 addons.go:515] duration metric: took 566.533433ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:44:55.825650  330644 default_sa.go:45] found service account: "default"
	I1119 02:44:55.825669  330644 default_sa.go:55] duration metric: took 2.044637ms for default service account to be created ...
	I1119 02:44:55.825682  330644 kubeadm.go:587] duration metric: took 568.038142ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:44:55.825701  330644 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:44:55.827786  330644 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:44:55.827811  330644 node_conditions.go:123] node cpu capacity is 8
	I1119 02:44:55.827828  330644 node_conditions.go:105] duration metric: took 2.120628ms to run NodePressure ...
	I1119 02:44:55.827844  330644 start.go:242] waiting for startup goroutines ...
	I1119 02:44:56.120226  330644 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-956139" context rescaled to 1 replicas
	I1119 02:44:56.120268  330644 start.go:247] waiting for cluster config update ...
	I1119 02:44:56.120378  330644 start.go:256] writing updated cluster config ...
	I1119 02:44:56.120780  330644 ssh_runner.go:195] Run: rm -f paused
	I1119 02:44:56.183785  330644 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:56.185393  330644 out.go:179] * Done! kubectl is now configured to use "newest-cni-956139" cluster and "default" namespace by default
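	
	The readiness probes above can be replayed by hand against the same endpoint. A minimal sketch, assuming the apiserver address from this log (192.168.76.2:8443) and minikube's default certificate layout under ~/.minikube (paths may differ on other setups):
	
	  $ curl --cacert ~/.minikube/ca.crt \
	      --cert ~/.minikube/profiles/newest-cni-956139/client.crt \
	      --key  ~/.minikube/profiles/newest-cni-956139/client.key \
	      https://192.168.76.2:8443/healthz
	  ok        # a healthy control plane answers with the bare string "ok", as seen above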
	
	
	==> CRI-O <==
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.469913056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.470101579Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fffbca222c4405b893aec800a0b00d976f3440b90097f95695f841cc4df0f2a1/merged/etc/passwd: no such file or directory"
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.470135294Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fffbca222c4405b893aec800a0b00d976f3440b90097f95695f841cc4df0f2a1/merged/etc/group: no such file or directory"
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.470419165Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.497349934Z" level=info msg="Created container c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132: kube-system/storage-provisioner/storage-provisioner" id=e92c2200-c59f-4003-bcf3-51d0a15ad8f2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.49792222Z" level=info msg="Starting container: c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132" id=a5e87abb-c75b-4308-a6e9-ee7a96f89271 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:36 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:36.499532389Z" level=info msg="Started container" PID=1704 containerID=c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132 description=kube-system/storage-provisioner/storage-provisioner id=a5e87abb-c75b-4308-a6e9-ee7a96f89271 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4da141227582a4cfa315aeb58b3d879fe26621fc1365da0d5be8196d50ec810f
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.307987267Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.311964245Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.311996111Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.312020786Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.317363587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.317391327Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.317411705Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.322704128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.322731444Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.322752513Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.326344526Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.32636437Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.326379151Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.329947608Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.329970091Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.329987686Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.333182725Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:44:46 default-k8s-diff-port-167150 crio[564]: time="2025-11-19T02:44:46.33320028Z" level=info msg="Updated default CNI network name to kindnet"
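	
	Each CREATE/WRITE/RENAME event above is CRI-O's CNI config watcher reacting to kindnet rewriting its conflist atomically (write a .temp file, then rename it into place). To inspect the file CRI-O ends up loading, a sketch using the profile name from this log:
	
	  $ out/minikube-linux-amd64 ssh -p default-k8s-diff-port-167150 -- \
	      sudo cat /etc/cni/net.d/10-kindnet.conflist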
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c8385e0cc96e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   4da141227582a       storage-provisioner                                    kube-system
	bcec9b49dd5da       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   fdf526a9ab55e       dashboard-metrics-scraper-6ffb444bf9-f7h7r             kubernetes-dashboard
	85d4c9fe32d3e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   87019f71636be       kubernetes-dashboard-855c9754f9-p96nm                  kubernetes-dashboard
	eeac877f14e22       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   bb5ef9ce42a26       coredns-66bc5c9577-bht2q                               kube-system
	5af6ef838078a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   33a76352dd3d3       busybox                                                default
	f7ef5557ab210       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   22978c72787eb       kindnet-rs6jh                                          kube-system
	1a72617903e63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   4da141227582a       storage-provisioner                                    kube-system
	84cc1d377e54e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   b067eea99e08f       kube-proxy-8gl4n                                       kube-system
	0850d32773d17       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   bd597e9ff78d5       etcd-default-k8s-diff-port-167150                      kube-system
	299bbab984622       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   b78fc90e86aec       kube-controller-manager-default-k8s-diff-port-167150   kube-system
	7cdb91f637031       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   5932253f606ba       kube-scheduler-default-k8s-diff-port-167150            kube-system
	f308d3728814c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   66d9550e6658b       kube-apiserver-default-k8s-diff-port-167150            kube-system
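	
	This table is collected over the CRI; it can be regenerated on the node with crictl, which talks to the same CRI-O socket. A sketch:
	
	  $ out/minikube-linux-amd64 ssh -p default-k8s-diff-port-167150 -- sudo crictl ps -a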
	
	
	==> coredns [eeac877f14e22971c9a442dc5730f94bf48becadd83cf5c234243e980bc2e2dd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53121 - 41446 "HINFO IN 5048112904534079613.722412660483301482. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.482980222s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
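	
	The dial timeouts to 10.96.0.1:443 mean CoreDNS could not reach the kubernetes Service VIP for its first ~30 seconds; the same timeout shows up in the kindnet and storage-provisioner logs below, so the service network was simply not wired up yet after the restart. A sketch to confirm the VIP is backed by the real apiserver endpoint, assuming the context name used by this test:
	
	  $ kubectl --context default-k8s-diff-port-167150 get svc kubernetes -n default
	  $ kubectl --context default-k8s-diff-port-167150 get endpointslices -n default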
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-167150
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-167150
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=default-k8s-diff-port-167150
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:43:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-167150
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:35 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:35 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:35 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:44:35 +0000   Wed, 19 Nov 2025 02:43:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-167150
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                e0cfffa3-371a-463d-bbd7-aef4f2317c27
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-bht2q                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-167150                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-rs6jh                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-167150             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-167150    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-8gl4n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-167150             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-f7h7r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p96nm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node default-k8s-diff-port-167150 event: Registered Node default-k8s-diff-port-167150 in Controller
	  Normal  NodeReady                94s                  kubelet          Node default-k8s-diff-port-167150 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-167150 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node default-k8s-diff-port-167150 event: Registered Node default-k8s-diff-port-167150 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [0850d32773d1729f97e0f3baf42d1b3638a7327abc66f584efafbdaa4334a283] <==
	{"level":"warn","ts":"2025-11-19T02:44:03.737405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.754025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.768589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.786000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.802958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.823351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.836051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.850311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.865759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.891326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.922036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.933872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.954737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.975812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:03.999649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.023650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.068701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.086916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.094769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:04.213255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:07.701890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.326342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" limit:1 ","response":"range_response_count:1 size:212"}
	{"level":"info","ts":"2025-11-19T02:44:07.702010Z","caller":"traceutil/trace.go:172","msg":"trace[624585779] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/attachdetach-controller; range_end:; response_count:1; response_revision:546; }","duration":"113.455209ms","start":"2025-11-19T02:44:07.588513Z","end":"2025-11-19T02:44:07.701969Z","steps":["trace[624585779] 'agreement among raft nodes before linearized reading'  (duration: 24.961272ms)","trace[624585779] 'range keys from in-memory index tree'  (duration: 88.255178ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:44:07.702344Z","caller":"traceutil/trace.go:172","msg":"trace[1148738554] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"118.504323ms","start":"2025-11-19T02:44:07.583813Z","end":"2025-11-19T02:44:07.702317Z","steps":["trace[1148738554] 'process raft request'  (duration: 29.706883ms)","trace[1148738554] 'compare'  (duration: 88.190835ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:44:33.663078Z","caller":"traceutil/trace.go:172","msg":"trace[1009595485] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"126.067301ms","start":"2025-11-19T02:44:33.536996Z","end":"2025-11-19T02:44:33.663064Z","steps":["trace[1009595485] 'process raft request'  (duration: 125.938237ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:44:34.625524Z","caller":"traceutil/trace.go:172","msg":"trace[88433709] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"163.630142ms","start":"2025-11-19T02:44:34.461876Z","end":"2025-11-19T02:44:34.625506Z","steps":["trace[88433709] 'process raft request'  (duration: 163.457435ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:44:58 up  1:27,  0 user,  load average: 3.11, 3.26, 2.28
	Linux default-k8s-diff-port-167150 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f7ef5557ab210d9505a17d76636e0b17ddac6b55834fdc5d6452172261f6d65e] <==
	I1119 02:44:06.013988       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:44:06.014482       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 02:44:06.014659       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:44:06.014670       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:44:06.014679       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:44:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:44:06.307592       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:44:06.308880       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:44:06.308906       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:44:06.309016       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 02:44:36.308758       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 02:44:36.308765       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 02:44:36.308778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 02:44:36.308769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 02:44:37.709132       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:44:37.709158       1 metrics.go:72] Registering metrics
	I1119 02:44:37.709218       1 controller.go:711] "Syncing nftables rules"
	I1119 02:44:46.307726       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:44:46.307784       1 main.go:301] handling current node
	I1119 02:44:56.308590       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:44:56.308641       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f308d3728814cf13897a458da3b827483ae71b6a4cf2cb0fd38e141e14586a3e] <==
	I1119 02:44:05.191085       1 aggregator.go:171] initial CRD sync complete...
	I1119 02:44:05.191290       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:44:05.191398       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:44:05.191574       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:44:05.191599       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:44:05.191552       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 02:44:05.191583       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:44:05.192293       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:44:05.193251       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:44:05.193883       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:44:05.194743       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 02:44:05.194774       1 policy_source.go:240] refreshing policies
	I1119 02:44:05.204829       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:44:05.230151       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:44:05.427954       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:44:05.855018       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:44:05.912019       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:44:05.940058       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:44:05.952255       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:44:06.021289       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.32.142"}
	I1119 02:44:06.038621       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.86.5"}
	I1119 02:44:06.074922       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:44:08.895714       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:44:08.998639       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:44:09.100623       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [299bbab984622e99c9bf240099fd1891299f48da807c2b0ab1553ad4885d7c13] <==
	I1119 02:44:08.550067       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:44:08.552460       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 02:44:08.552656       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:44:08.553089       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:44:08.553170       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:44:08.553246       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-167150"
	I1119 02:44:08.553383       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 02:44:08.554960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:44:08.555042       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 02:44:08.556602       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:44:08.556645       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:44:08.556672       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:44:08.556851       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:44:08.556897       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:44:08.557883       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:44:08.561756       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:44:08.565101       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:44:08.568318       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:44:08.572373       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:44:08.578731       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:44:08.589252       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:44:08.591590       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:44:08.591625       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:44:08.591636       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:44:09.107077       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [84cc1d377e54ec20e1b335bb3f2c1a89459c779c9092721db31a065e74db7d72] <==
	I1119 02:44:05.861356       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:44:05.995481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:44:06.098389       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:44:06.098744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1119 02:44:06.098930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:44:06.169060       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:44:06.169181       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:44:06.178578       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:44:06.179678       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:44:06.180040       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:06.182194       1 config.go:200] "Starting service config controller"
	I1119 02:44:06.182242       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:44:06.182406       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:44:06.182616       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:44:06.182705       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:44:06.183018       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:44:06.183355       1 config.go:309] "Starting node config controller"
	I1119 02:44:06.183371       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:44:06.282910       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:44:06.283046       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:44:06.283105       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:44:06.283687       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [7cdb91f63703193832fa8fc84ec766b4d87e2ac3e24887dcbcb074dfdac9634d] <==
	I1119 02:44:02.675999       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:44:05.142623       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:44:05.142657       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1119 02:44:05.142670       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:44:05.142679       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:44:05.211931       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:44:05.211965       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:05.215566       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:44:05.215604       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:44:05.216613       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:44:05.216944       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:44:05.316872       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:44:09 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:09.148179     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fb3343ba-3948-4d60-a357-b6b9a574f8c0-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-f7h7r\" (UID: \"fb3343ba-3948-4d60-a357-b6b9a574f8c0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r"
	Nov 19 02:44:09 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:09.148240     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffz6r\" (UniqueName: \"kubernetes.io/projected/fb3343ba-3948-4d60-a357-b6b9a574f8c0-kube-api-access-ffz6r\") pod \"dashboard-metrics-scraper-6ffb444bf9-f7h7r\" (UID: \"fb3343ba-3948-4d60-a357-b6b9a574f8c0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r"
	Nov 19 02:44:09 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:09.148269     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4sjm\" (UniqueName: \"kubernetes.io/projected/6f86096a-5658-426d-b3dc-6edeb5e215e9-kube-api-access-b4sjm\") pod \"kubernetes-dashboard-855c9754f9-p96nm\" (UID: \"6f86096a-5658-426d-b3dc-6edeb5e215e9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p96nm"
	Nov 19 02:44:09 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:09.148300     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6f86096a-5658-426d-b3dc-6edeb5e215e9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-p96nm\" (UID: \"6f86096a-5658-426d-b3dc-6edeb5e215e9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p96nm"
	Nov 19 02:44:10 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:10.797595     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 02:44:15 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:15.529601     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p96nm" podStartSLOduration=2.320065993 podStartE2EDuration="6.529573786s" podCreationTimestamp="2025-11-19 02:44:09 +0000 UTC" firstStartedPulling="2025-11-19 02:44:09.4193254 +0000 UTC m=+8.238486735" lastFinishedPulling="2025-11-19 02:44:13.628833195 +0000 UTC m=+12.447994528" observedRunningTime="2025-11-19 02:44:14.419235775 +0000 UTC m=+13.238397120" watchObservedRunningTime="2025-11-19 02:44:15.529573786 +0000 UTC m=+14.348735099"
	Nov 19 02:44:17 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:17.407310     724 scope.go:117] "RemoveContainer" containerID="612f5363273342aeb336f4a770199d354a5fc3b4b39e5d551b13fe66e77c2931"
	Nov 19 02:44:18 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:18.412175     724 scope.go:117] "RemoveContainer" containerID="612f5363273342aeb336f4a770199d354a5fc3b4b39e5d551b13fe66e77c2931"
	Nov 19 02:44:18 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:18.412320     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:18 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:18.412543     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:19 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:19.416165     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:19 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:19.416354     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:22 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:22.481197     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:22 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:22.481364     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:33 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:33.307347     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:34 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:34.453904     724 scope.go:117] "RemoveContainer" containerID="f150345ad791442f22b7f6b4745b2e5f2c92dbf66ad6ed2f2e2f720cbbadf497"
	Nov 19 02:44:34 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:34.454145     724 scope.go:117] "RemoveContainer" containerID="bcec9b49dd5da59561dcca13fbccd1ddc4abfb2a5907b4002476e334cc41c669"
	Nov 19 02:44:34 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:34.454329     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:36 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:36.462761     724 scope.go:117] "RemoveContainer" containerID="1a72617903e6314079ce5b0564a60457c183f68e6e318bd4b089000d5050df80"
	Nov 19 02:44:42 default-k8s-diff-port-167150 kubelet[724]: I1119 02:44:42.480982     724 scope.go:117] "RemoveContainer" containerID="bcec9b49dd5da59561dcca13fbccd1ddc4abfb2a5907b4002476e334cc41c669"
	Nov 19 02:44:42 default-k8s-diff-port-167150 kubelet[724]: E1119 02:44:42.481153     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f7h7r_kubernetes-dashboard(fb3343ba-3948-4d60-a357-b6b9a574f8c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f7h7r" podUID="fb3343ba-3948-4d60-a357-b6b9a574f8c0"
	Nov 19 02:44:54 default-k8s-diff-port-167150 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:44:54 default-k8s-diff-port-167150 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:44:54 default-k8s-diff-port-167150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:44:54 default-k8s-diff-port-167150 systemd[1]: kubelet.service: Consumed 1.655s CPU time.
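	
	The scraper's back-off growing from 10s to 20s above is kubelet's standard CrashLoopBackOff escalation (the delay doubles per failed restart, capped at 5m). The restart count and last container state survive the kubelet stop at 02:44:54; a sketch to read them back, using the pod name from this log:
	
	  $ kubectl --context default-k8s-diff-port-167150 -n kubernetes-dashboard \
	      describe pod dashboard-metrics-scraper-6ffb444bf9-f7h7r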
	
	
	==> kubernetes-dashboard [85d4c9fe32d3e2d0a6c2edf0b44354e46f03306212318e808dee8bf625ac5497] <==
	2025/11/19 02:44:13 Using namespace: kubernetes-dashboard
	2025/11/19 02:44:13 Using in-cluster config to connect to apiserver
	2025/11/19 02:44:13 Using secret token for csrf signing
	2025/11/19 02:44:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:44:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:44:13 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 02:44:13 Generating JWE encryption key
	2025/11/19 02:44:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:44:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:44:14 Initializing JWE encryption key from synchronized object
	2025/11/19 02:44:14 Creating in-cluster Sidecar client
	2025/11/19 02:44:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:14 Serving insecurely on HTTP port: 9090
	2025/11/19 02:44:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:13 Starting overwatch
	
	
	==> storage-provisioner [1a72617903e6314079ce5b0564a60457c183f68e6e318bd4b089000d5050df80] <==
	I1119 02:44:05.804826       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:44:35.807919       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c8385e0cc96e047d9f413c86fd578dff340203a9b9a1ef224657235179328132] <==
	I1119 02:44:36.511391       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:44:36.518461       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:44:36.518500       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:44:36.520125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:39.975209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:44.235534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:47.834224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:50.887992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:53.910122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:53.917518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:53.917752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:44:53.918110       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167150_c6291bef-03db-4dcb-845b-565c6bf2bb39!
	I1119 02:44:53.917878       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3714f73f-a3cc-42cd-ae7e-a03ea89c8e13", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-167150_c6291bef-03db-4dcb-845b-565c6bf2bb39 became leader
	W1119 02:44:53.922086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:53.926462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:54.019737       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167150_c6291bef-03db-4dcb-845b-565c6bf2bb39!
	W1119 02:44:55.929197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:55.933149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:57.937323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:57.941733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
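
A note on the storage-provisioner log above: the `leaderelection.go` lines show the second provisioner instance taking the `kube-system/k8s.io-minikube-hostpath` lock before starting its controller, and the repeated `v1 Endpoints is deprecated` warnings come from that lock still being backed by an Endpoints object. Below is a minimal client-go sketch of the same pattern, using the newer Leases lock that avoids those warnings; the lock name, namespace, and log messages are taken from the output above, everything else is illustrative:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // identity, as shown in the "became leader" event above

		// A Lease-based lock; an Endpoints-based lock (as in the log) is what
		// triggers the v1.33+ deprecation warnings.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("successfully acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() { log.Println("leader lease lost") },
			},
		})
	}

Only the instance that wins the lock runs `OnStartedLeading`, which matches the "successfully acquired lease ... Starting provisioner controller" pair in the log.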
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150: exit status 2 (324.478967ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
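
The `--format={{.APIServer}}` flag above is a Go text/template rendered against minikube's status struct, which is why the command can print `Running` for the API server and still exit 2 when some other component is unhealthy: the template controls only the text, while the exit code carries the aggregate state. A toy illustration (the struct here is hypothetical; its fields merely mirror the names usable in `--format`):

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for minikube's status struct.
	type Status struct{ Host, Kubelet, APIServer string }

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Running" even though Kubelet is Stopped; overall health has
		// to be signalled out-of-band, e.g. via the process exit code.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}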
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-167150 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.56s)
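
The final probe in the post-mortem above (`kubectl get po -A --field-selector=status.phase!=Running`) is a cheap way to surface any pod stuck outside the Running phase. For completeness, a client-go sketch of the same server-side filter; the kubeconfig path is an assumption, the field selector is verbatim from the test helper:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		// Server-side filter, identical to:
		//   kubectl get po -A --field-selector=status.phase!=Running
		pods, err := clientset.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}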

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.08s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (268.390179ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:44:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
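
Every MK_ADDON_ENABLE_PAUSED failure in this report shares the stderr above: before enabling an addon, minikube checks that no containers are paused by shelling out to `sudo runc list -f json`, and any non-zero exit aborts the enable. Here the check dies before reading any pause state because the runc state directory `/run/runc` does not exist on this CRI-O node. A sketch of that probe; the JSON field names follow runc's documented state output, the error handling is illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Same command the failing check runs. With /run/runc missing, runc
		// exits non-zero and err carries the message quoted in the test log.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			log.Fatalf("list paused: %v", err)
		}
		// runc prints a JSON array of container states ("null" when there are
		// none, which leaves the slice empty after unmarshalling).
		var states []struct {
			ID     string `json:"id"`
			Status string `json:"status"`
		}
		if err := json.Unmarshal(out, &states); err != nil {
			log.Fatal(err)
		}
		for _, s := range states {
			if s.Status == "paused" {
				fmt.Println("paused container:", s.ID)
			}
		}
	}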
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-956139
helpers_test.go:243: (dbg) docker inspect newest-cni-956139:

-- stdout --
	[
	    {
	        "Id": "9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3",
	        "Created": "2025-11-19T02:44:35.029315719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 331237,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:44:35.066270795Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/hosts",
	        "LogPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3-json.log",
	        "Name": "/newest-cni-956139",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-956139:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-956139",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3",
	                "LowerDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-956139",
	                "Source": "/var/lib/docker/volumes/newest-cni-956139/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-956139",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-956139",
	                "name.minikube.sigs.k8s.io": "newest-cni-956139",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cf0ce52c4454e5714c829e340431869a5ad91bcfeffb92348ce8b888998a98f8",
	            "SandboxKey": "/var/run/docker/netns/cf0ce52c4454",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-956139": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c158809bc17a4fb99da40a7d719b98cd3e7fa529cc77ba53af7dfac4ad266e67",
	                    "EndpointID": "39b93ec4229beb97886f85a480cbb322b734002c888298aecbf1b01b89498cfb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "aa:66:1f:9b:4a:8a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-956139",
	                        "9939767f1de8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
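
One practical note on the inspect dump above: minikube does not parse this JSON wholesale. The "Last Start" log below shows it pulling single fields with docker's Go-template `-f` flag (the 22/tcp host port), and the same one-liner works for the API server port. A minimal sketch:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract the host port mapped to 8443/tcp, mirroring the template
		// minikube uses for 22/tcp in the log below.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"newest-cni-956139").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // "33131" per the dump above
	}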
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-956139 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-811173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-167150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-167150 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-837474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ default-k8s-diff-port-167150 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p default-k8s-diff-port-167150 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:44:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:44:29.891671  330644 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:29.891773  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.891782  330644 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:29.891786  330644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:29.892013  330644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:29.892489  330644 out.go:368] Setting JSON to false
	I1119 02:44:29.893932  330644 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5217,"bootTime":1763515053,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:44:29.894009  330644 start.go:143] virtualization: kvm guest
	I1119 02:44:29.896106  330644 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:44:29.897349  330644 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:44:29.897381  330644 notify.go:221] Checking for updates...
	I1119 02:44:29.899649  330644 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:44:29.900639  330644 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:29.901703  330644 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:44:29.902810  330644 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:44:29.903920  330644 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:44:29.905455  330644 config.go:182] Loaded profile config "default-k8s-diff-port-167150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905620  330644 config.go:182] Loaded profile config "embed-certs-811173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905735  330644 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:29.905864  330644 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:44:29.930269  330644 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:44:29.930390  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:29.990852  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:29.980528215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:29.990974  330644 docker.go:319] overlay module found
	I1119 02:44:29.992680  330644 out.go:179] * Using the docker driver based on user configuration
	I1119 02:44:29.993882  330644 start.go:309] selected driver: docker
	I1119 02:44:29.993897  330644 start.go:930] validating driver "docker" against <nil>
	I1119 02:44:29.993908  330644 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:44:29.994485  330644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:30.055174  330644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:44:30.045301349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:30.055367  330644 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 02:44:30.055398  330644 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 02:44:30.055690  330644 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:44:30.057878  330644 out.go:179] * Using Docker driver with root privileges
	I1119 02:44:30.059068  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:30.059130  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:30.059141  330644 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:44:30.059196  330644 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:30.060543  330644 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:44:30.061681  330644 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:44:30.062975  330644 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:44:30.064114  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.064143  330644 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:44:30.064167  330644 cache.go:65] Caching tarball of preloaded images
	I1119 02:44:30.064199  330644 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:44:30.064251  330644 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:44:30.064266  330644 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:44:30.064364  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:30.064387  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json: {Name:mk5f6a602a7486c803f28ee981bc4fb72f30089f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:30.086997  330644 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:44:30.087020  330644 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:44:30.087033  330644 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:44:30.087059  330644 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:44:30.087146  330644 start.go:364] duration metric: took 69.531µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:44:30.087169  330644 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:30.087250  330644 start.go:125] createHost starting for "" (driver="docker")
	W1119 02:44:25.920223  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:28.420250  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:30.420774  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:29.634283  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:31.634456  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:34.134853  320707 pod_ready.go:104] pod "coredns-66bc5c9577-6zqr2" is not "Ready", error: <nil>
	W1119 02:44:29.824614  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:31.825210  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:33.861933  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:30.090250  330644 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:44:30.090540  330644 start.go:159] libmachine.API.Create for "newest-cni-956139" (driver="docker")
	I1119 02:44:30.090580  330644 client.go:173] LocalClient.Create starting
	I1119 02:44:30.090711  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem
	I1119 02:44:30.090762  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090788  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.090868  330644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem
	I1119 02:44:30.090897  330644 main.go:143] libmachine: Decoding PEM data...
	I1119 02:44:30.090911  330644 main.go:143] libmachine: Parsing certificate...
	I1119 02:44:30.091311  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:44:30.108825  330644 cli_runner.go:211] docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:44:30.108874  330644 network_create.go:284] running [docker network inspect newest-cni-956139] to gather additional debugging logs...
	I1119 02:44:30.108888  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139
	W1119 02:44:30.125848  330644 cli_runner.go:211] docker network inspect newest-cni-956139 returned with exit code 1
	I1119 02:44:30.125873  330644 network_create.go:287] error running [docker network inspect newest-cni-956139]: docker network inspect newest-cni-956139: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-956139 not found
	I1119 02:44:30.125887  330644 network_create.go:289] output of [docker network inspect newest-cni-956139]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-956139 not found
	
	** /stderr **
	I1119 02:44:30.126008  330644 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:30.145372  330644 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
	I1119 02:44:30.146006  330644 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-70e7d73f86d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:64:3f:46:8e:7a} reservation:<nil>}
	I1119 02:44:30.146778  330644 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ef477b5a23 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:eb:22:b3:62:92} reservation:<nil>}
	I1119 02:44:30.147612  330644 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee2320}
	I1119 02:44:30.147633  330644 network_create.go:124] attempt to create docker network newest-cni-956139 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 02:44:30.147689  330644 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-956139 newest-cni-956139
	I1119 02:44:30.194747  330644 network_create.go:108] docker network newest-cni-956139 192.168.76.0/24 created
	I1119 02:44:30.194772  330644 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-956139" container
	I1119 02:44:30.194838  330644 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:44:30.212175  330644 cli_runner.go:164] Run: docker volume create newest-cni-956139 --label name.minikube.sigs.k8s.io=newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:44:30.229588  330644 oci.go:103] Successfully created a docker volume newest-cni-956139
	I1119 02:44:30.229664  330644 cli_runner.go:164] Run: docker run --rm --name newest-cni-956139-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --entrypoint /usr/bin/test -v newest-cni-956139:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:44:30.612069  330644 oci.go:107] Successfully prepared a docker volume newest-cni-956139
	I1119 02:44:30.612124  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:30.612132  330644 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:44:30.612187  330644 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1119 02:44:32.919409  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:34.920166  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:34.646141  320707 pod_ready.go:94] pod "coredns-66bc5c9577-6zqr2" is "Ready"
	I1119 02:44:34.646170  320707 pod_ready.go:86] duration metric: took 35.016957338s for pod "coredns-66bc5c9577-6zqr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.648819  320707 pod_ready.go:83] waiting for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.831828  320707 pod_ready.go:94] pod "etcd-embed-certs-811173" is "Ready"
	I1119 02:44:34.831852  320707 pod_ready.go:86] duration metric: took 183.006168ms for pod "etcd-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.834239  320707 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.837643  320707 pod_ready.go:94] pod "kube-apiserver-embed-certs-811173" is "Ready"
	I1119 02:44:34.837663  320707 pod_ready.go:86] duration metric: took 3.400351ms for pod "kube-apiserver-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.839329  320707 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:34.842652  320707 pod_ready.go:94] pod "kube-controller-manager-embed-certs-811173" is "Ready"
	I1119 02:44:34.842670  320707 pod_ready.go:86] duration metric: took 3.319388ms for pod "kube-controller-manager-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.032627  320707 pod_ready.go:83] waiting for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.432934  320707 pod_ready.go:94] pod "kube-proxy-s5bzz" is "Ready"
	I1119 02:44:35.432959  320707 pod_ready.go:86] duration metric: took 400.306652ms for pod "kube-proxy-s5bzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:35.633961  320707 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032469  320707 pod_ready.go:94] pod "kube-scheduler-embed-certs-811173" is "Ready"
	I1119 02:44:36.032499  320707 pod_ready.go:86] duration metric: took 398.480495ms for pod "kube-scheduler-embed-certs-811173" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:36.032511  320707 pod_ready.go:40] duration metric: took 36.406499301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:36.080404  320707 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:36.082160  320707 out.go:179] * Done! kubectl is now configured to use "embed-certs-811173" cluster and "default" namespace by default
	I1119 02:44:34.960079  330644 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956139:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.347852696s)
	I1119 02:44:34.960108  330644 kic.go:203] duration metric: took 4.347972861s to extract preloaded images to volume ...
	W1119 02:44:34.960206  330644 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:44:34.960254  330644 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:44:34.960300  330644 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:44:35.014083  330644 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-956139 --name newest-cni-956139 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956139 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-956139 --network newest-cni-956139 --ip 192.168.76.2 --volume newest-cni-956139:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:44:35.325493  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Running}}
	I1119 02:44:35.343669  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.361759  330644 cli_runner.go:164] Run: docker exec newest-cni-956139 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:44:35.406925  330644 oci.go:144] the created container "newest-cni-956139" has a running status.
	I1119 02:44:35.406959  330644 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa...
	I1119 02:44:35.779267  330644 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:44:35.805615  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.826512  330644 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:44:35.826530  330644 kic_runner.go:114] Args: [docker exec --privileged newest-cni-956139 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:44:35.871319  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:35.889991  330644 machine.go:94] provisionDockerMachine start ...
	I1119 02:44:35.890097  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:35.909789  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:35.910136  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:35.910158  330644 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:44:36.043778  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.043805  330644 ubuntu.go:182] provisioning hostname "newest-cni-956139"
	I1119 02:44:36.043885  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.065697  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.065904  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.065918  330644 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956139 && echo "newest-cni-956139" | sudo tee /etc/hostname
	I1119 02:44:36.211004  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:44:36.211088  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.229392  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.229616  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.229635  330644 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956139/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:44:36.359138  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
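	The repeated docker container inspect template above is how minikube resolves the host-side SSH port for the node container. A minimal standalone sketch of the same lookup — key path and "docker" user taken from this log, container name being the profile name — is:
	
	# Resolve the published host port for the container's sshd (22/tcp)
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-956139)
	# Connect the way the sshutil client in this log does
	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa \
	    -p "$PORT" docker@127.0.0.1 hostname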
	I1119 02:44:36.359177  330644 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:44:36.359210  330644 ubuntu.go:190] setting up certificates
	I1119 02:44:36.359219  330644 provision.go:84] configureAuth start
	I1119 02:44:36.359262  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:36.381048  330644 provision.go:143] copyHostCerts
	I1119 02:44:36.381118  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:44:36.381134  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:44:36.381241  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:44:36.381393  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:44:36.381407  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:44:36.381473  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:44:36.381598  330644 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:44:36.381613  330644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:44:36.381659  330644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:44:36.381762  330644 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956139 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956139]
	I1119 02:44:36.425094  330644 provision.go:177] copyRemoteCerts
	I1119 02:44:36.425145  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:44:36.425178  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.444152  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.542494  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:44:36.560963  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:44:36.577617  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:44:36.594302  330644 provision.go:87] duration metric: took 235.073311ms to configureAuth
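	The server cert generated during configureAuth is signed for the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-956139). A quick manual check of those SANs, assuming openssl is available on the CI host, is:
	
	# Print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'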
	I1119 02:44:36.594322  330644 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:44:36.594527  330644 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:36.594625  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.612019  330644 main.go:143] libmachine: Using SSH client type: native
	I1119 02:44:36.612218  330644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1119 02:44:36.612232  330644 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:44:36.879790  330644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:44:36.879819  330644 machine.go:97] duration metric: took 989.804229ms to provisionDockerMachine
	I1119 02:44:36.879830  330644 client.go:176] duration metric: took 6.789240603s to LocalClient.Create
	I1119 02:44:36.879851  330644 start.go:167] duration metric: took 6.789312626s to libmachine.API.Create "newest-cni-956139"
	I1119 02:44:36.879860  330644 start.go:293] postStartSetup for "newest-cni-956139" (driver="docker")
	I1119 02:44:36.879872  330644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:44:36.879933  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:44:36.879968  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:36.898156  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:36.993744  330644 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:44:36.997203  330644 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:44:36.997235  330644 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:44:36.997254  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:44:36.997312  330644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:44:36.997404  330644 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:44:36.997536  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:44:37.005305  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:37.024142  330644 start.go:296] duration metric: took 144.272497ms for postStartSetup
	I1119 02:44:37.024490  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.042142  330644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:44:37.042364  330644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:44:37.042421  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.060279  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.151155  330644 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:44:37.155487  330644 start.go:128] duration metric: took 7.068223226s to createHost
	I1119 02:44:37.155509  330644 start.go:83] releasing machines lock for "newest-cni-956139", held for 7.068353821s
	I1119 02:44:37.155567  330644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:44:37.172738  330644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:44:37.172750  330644 ssh_runner.go:195] Run: cat /version.json
	I1119 02:44:37.172802  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.172817  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:37.191403  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.191761  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:37.349781  330644 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:37.356447  330644 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:44:37.390971  330644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:44:37.395386  330644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:44:37.395452  330644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:44:37.420966  330644 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:44:37.421000  330644 start.go:496] detecting cgroup driver to use...
	I1119 02:44:37.421031  330644 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:44:37.421116  330644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:44:37.437016  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:44:37.448636  330644 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:44:37.448680  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:44:37.464103  330644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:44:37.483229  330644 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:44:37.569719  330644 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:44:37.663891  330644 docker.go:234] disabling docker service ...
	I1119 02:44:37.663946  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:44:37.684672  330644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:44:37.699707  330644 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:44:37.783938  330644 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:44:37.866466  330644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:44:37.878906  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:44:37.893148  330644 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:44:37.893200  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.903765  330644 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:44:37.903825  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.912380  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.922240  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.930944  330644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:44:37.938625  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.947066  330644 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.960171  330644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:44:37.968261  330644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:44:37.975267  330644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:44:37.982398  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.060067  330644 ssh_runner.go:195] Run: sudo systemctl restart crio
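	Taken together, the sed edits above amount to roughly the following settings in /etc/crio/crio.conf.d/02-crio.conf at the moment of the restart; this is a reconstruction from the logged commands, not a capture of the file, and omits keys the edits do not touch:
	
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]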
	I1119 02:44:38.192960  330644 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:44:38.193022  330644 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:44:38.196763  330644 start.go:564] Will wait 60s for crictl version
	I1119 02:44:38.196824  330644 ssh_runner.go:195] Run: which crictl
	I1119 02:44:38.200161  330644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:44:38.225001  330644 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:44:38.225065  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.251944  330644 ssh_runner.go:195] Run: crio --version
	I1119 02:44:38.282138  330644 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:44:38.283487  330644 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:44:38.300312  330644 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:44:38.304280  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:38.315573  330644 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1119 02:44:36.325065  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	W1119 02:44:38.824893  321785 pod_ready.go:104] pod "coredns-66bc5c9577-bht2q" is not "Ready", error: <nil>
	I1119 02:44:38.316650  330644 kubeadm.go:884] updating cluster {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:44:38.316772  330644 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:44:38.316823  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.347925  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.347943  330644 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:44:38.348024  330644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:44:38.371370  330644 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:44:38.371386  330644 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:44:38.371393  330644 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:44:38.371489  330644 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:44:38.371568  330644 ssh_runner.go:195] Run: crio config
	I1119 02:44:38.414403  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:38.414425  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:38.414455  330644 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:44:38.414480  330644 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956139 NodeName:newest-cni-956139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:44:38.414596  330644 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:44:38.414650  330644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:44:38.422980  330644 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:44:38.423037  330644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:44:38.430764  330644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 02:44:38.442899  330644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:44:38.457503  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
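	The rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new here and promoted to kubeadm.yaml just before init. A manual sanity check of such a file is possible with the validate subcommand, assuming the kubeadm build ships it (recent releases do):
	
	# Hypothetical manual check of the generated config on the node
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new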
	I1119 02:44:38.470194  330644 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:44:38.473583  330644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:44:38.482869  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:38.562300  330644 ssh_runner.go:195] Run: sudo systemctl start kubelet
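	The unit file and drop-in scp'd a few lines above are what systemd merges when the kubelet starts; viewing the merged definition on the node is plain systemd, nothing minikube-specific:
	
	# Shows /lib/systemd/system/kubelet.service plus
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl cat kubelet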
	I1119 02:44:38.585622  330644 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139 for IP: 192.168.76.2
	I1119 02:44:38.585639  330644 certs.go:195] generating shared ca certs ...
	I1119 02:44:38.585658  330644 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.585812  330644 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:44:38.585880  330644 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:44:38.585900  330644 certs.go:257] generating profile certs ...
	I1119 02:44:38.585973  330644 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key
	I1119 02:44:38.585994  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt with IP's: []
	I1119 02:44:38.886736  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt ...
	I1119 02:44:38.886761  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.crt: {Name:mkb981b48727217d5d544f8c1ece639a24196b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.886914  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key ...
	I1119 02:44:38.886927  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key: {Name:mkf09d335927b94ecd83db709f24055ce131f9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:38.887002  330644 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d
	I1119 02:44:38.887016  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 02:44:39.078031  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d ...
	I1119 02:44:39.078059  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d: {Name:mkcff50d0bd0e5de553650f0790abc33df1f3d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078203  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d ...
	I1119 02:44:39.078217  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d: {Name:mk332d91d4c4926805e4ae3abcbd91571604bef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.078314  330644 certs.go:382] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt
	I1119 02:44:39.078410  330644 certs.go:386] copying /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d -> /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key
	I1119 02:44:39.078500  330644 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key
	I1119 02:44:39.078517  330644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt with IP's: []
	I1119 02:44:39.492473  330644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt ...
	I1119 02:44:39.492501  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt: {Name:mk2d2a0752005ddbf3ff7866b2d888f6c88921c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.492685  330644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key ...
	I1119 02:44:39.492708  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key: {Name:mk0676b22a9381558c3b1f8b4d9f9ded76cf6a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:39.492943  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:44:39.492986  330644 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:44:39.493002  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:44:39.493035  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:44:39.493063  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:44:39.493096  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:44:39.493152  330644 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:44:39.493921  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:44:39.511675  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:44:39.528321  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:44:39.545416  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:44:39.561752  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:44:39.578259  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:44:39.594332  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:44:39.610201  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:44:39.626532  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:44:39.646920  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:44:39.663725  330644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:44:39.680824  330644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:44:39.692613  330644 ssh_runner.go:195] Run: openssl version
	I1119 02:44:39.699229  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:44:39.708084  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711716  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.711771  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:44:39.746645  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:44:39.754713  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:44:39.762929  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766299  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.766335  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:44:39.800570  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:44:39.808541  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:44:39.816270  330644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819952  330644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.819989  330644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:44:39.854738  330644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
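	The 3ec20f2e.0, b5213941.0 and 51391683.0 symlink names above are OpenSSL subject hashes, computed by the openssl x509 -hash runs in between. Reproducing one by hand, with the same PEM path as the log:
	
	# c_rehash-style link: <subject_hash>.0 -> PEM
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"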
	I1119 02:44:39.863275  330644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:44:39.866811  330644 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:44:39.866866  330644 kubeadm.go:401] StartCluster: {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:44:39.866959  330644 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:44:39.867032  330644 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:44:39.893234  330644 cri.go:89] found id: ""
	I1119 02:44:39.893298  330644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:44:39.901084  330644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:44:39.908779  330644 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:44:39.908820  330644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:44:39.915918  330644 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:44:39.915956  330644 kubeadm.go:158] found existing configuration files:
	
	I1119 02:44:39.916000  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:44:39.924150  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:44:39.924192  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:44:39.931134  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:44:39.938135  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:44:39.938182  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:44:39.945082  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.952377  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:44:39.952425  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:44:39.959861  330644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:44:39.966757  330644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:44:39.966801  330644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:44:39.973926  330644 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:44:40.012094  330644 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:44:40.012170  330644 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:44:40.051599  330644 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:44:40.051753  330644 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:44:40.051826  330644 kubeadm.go:319] OS: Linux
	I1119 02:44:40.051888  330644 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:44:40.051939  330644 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:44:40.052007  330644 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:44:40.052083  330644 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:44:40.052163  330644 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:44:40.052233  330644 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:44:40.052284  330644 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:44:40.052344  330644 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:44:40.110629  330644 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:44:40.110786  330644 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:44:40.110919  330644 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:44:40.118761  330644 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1119 02:44:37.420903  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:39.920505  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:40.823992  321785 pod_ready.go:94] pod "coredns-66bc5c9577-bht2q" is "Ready"
	I1119 02:44:40.824024  321785 pod_ready.go:86] duration metric: took 34.00468535s for pod "coredns-66bc5c9577-bht2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.826065  321785 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.829510  321785 pod_ready.go:94] pod "etcd-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.829533  321785 pod_ready.go:86] duration metric: took 3.445845ms for pod "etcd-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.831135  321785 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.834490  321785 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:40.834508  321785 pod_ready.go:86] duration metric: took 3.353905ms for pod "kube-apiserver-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:40.836222  321785 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.022776  321785 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:41.022802  321785 pod_ready.go:86] duration metric: took 186.560827ms for pod "kube-controller-manager-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.222650  321785 pod_ready.go:83] waiting for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.623243  321785 pod_ready.go:94] pod "kube-proxy-8gl4n" is "Ready"
	I1119 02:44:41.623276  321785 pod_ready.go:86] duration metric: took 400.60046ms for pod "kube-proxy-8gl4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:41.823313  321785 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222639  321785 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-167150" is "Ready"
	I1119 02:44:42.222665  321785 pod_ready.go:86] duration metric: took 399.326737ms for pod "kube-scheduler-default-k8s-diff-port-167150" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:42.222675  321785 pod_ready.go:40] duration metric: took 35.410146964s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:42.265461  321785 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:42.267962  321785 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-167150" cluster and "default" namespace by default
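	The pod_ready loop summarized above polls pod conditions label-set by label-set; a rough kubectl equivalent for one of those label sets, assuming the context minikube just wrote is named after the profile, is:
	
	# Approximate manual version of the readiness wait for the DNS pods
	kubectl --context default-k8s-diff-port-167150 -n kube-system \
	    wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s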
	I1119 02:44:40.120572  330644 out.go:252]   - Generating certificates and keys ...
	I1119 02:44:40.120676  330644 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:44:40.120767  330644 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:44:40.285783  330644 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:44:40.596128  330644 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:44:40.775594  330644 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:44:40.856728  330644 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:44:41.447992  330644 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:44:41.448141  330644 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.120936  330644 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:44:42.121139  330644 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-956139] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:44:42.400506  330644 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:44:42.544344  330644 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:44:42.820587  330644 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:44:42.820689  330644 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:44:42.995265  330644 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:44:43.162291  330644 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:44:43.196763  330644 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:44:43.556128  330644 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:44:43.787728  330644 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:44:43.788303  330644 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:44:43.792218  330644 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:44:43.793609  330644 out.go:252]   - Booting up control plane ...
	I1119 02:44:43.793714  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:44:43.793818  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:44:43.794447  330644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:44:43.811365  330644 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:44:43.811606  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:44:43.817701  330644 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:44:43.818010  330644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:44:43.818083  330644 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:44:43.912675  330644 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:44:43.912849  330644 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
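	The kubelet health probe referenced above is a plain local HTTP endpoint; checking it by hand on the node (curl assumed present in the kicbase image) is simply:
	
	# Same endpoint kubeadm polls during [kubelet-check]
	curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok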
	W1119 02:44:42.419894  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	W1119 02:44:44.921381  322722 pod_ready.go:104] pod "coredns-66bc5c9577-44bdr" is not "Ready", error: <nil>
	I1119 02:44:46.419827  322722 pod_ready.go:94] pod "coredns-66bc5c9577-44bdr" is "Ready"
	I1119 02:44:46.419857  322722 pod_ready.go:86] duration metric: took 38.00494675s for pod "coredns-66bc5c9577-44bdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.422128  322722 pod_ready.go:83] waiting for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.425877  322722 pod_ready.go:94] pod "etcd-no-preload-837474" is "Ready"
	I1119 02:44:46.425901  322722 pod_ready.go:86] duration metric: took 3.744715ms for pod "etcd-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.427596  322722 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.430915  322722 pod_ready.go:94] pod "kube-apiserver-no-preload-837474" is "Ready"
	I1119 02:44:46.430936  322722 pod_ready.go:86] duration metric: took 3.318971ms for pod "kube-apiserver-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.432827  322722 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.619267  322722 pod_ready.go:94] pod "kube-controller-manager-no-preload-837474" is "Ready"
	I1119 02:44:46.619298  322722 pod_ready.go:86] duration metric: took 186.448054ms for pod "kube-controller-manager-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:46.819349  322722 pod_ready.go:83] waiting for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.219089  322722 pod_ready.go:94] pod "kube-proxy-hmxzk" is "Ready"
	I1119 02:44:47.219115  322722 pod_ready.go:86] duration metric: took 399.745795ms for pod "kube-proxy-hmxzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.418899  322722 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819293  322722 pod_ready.go:94] pod "kube-scheduler-no-preload-837474" is "Ready"
	I1119 02:44:47.819318  322722 pod_ready.go:86] duration metric: took 400.396392ms for pod "kube-scheduler-no-preload-837474" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:44:47.819332  322722 pod_ready.go:40] duration metric: took 39.409998426s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:44:47.882918  322722 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:47.884667  322722 out.go:179] * Done! kubectl is now configured to use "no-preload-837474" cluster and "default" namespace by default
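	
	A quick sanity check at this point (a sketch, assuming the kubeconfig context is named after the profile, as the "Done!" line indicates):
	
	    kubectl config current-context            # expected: no-preload-837474
	    kubectl -n kube-system get pods -o wide   # the pods waited on above should all be Ready
	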
	I1119 02:44:44.914267  330644 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001584412s
	I1119 02:44:44.919834  330644 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:44:44.919954  330644 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:44:44.920098  330644 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:44:44.920202  330644 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:44:46.082445  330644 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.162579737s
	I1119 02:44:46.762642  330644 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.842786839s
	I1119 02:44:48.421451  330644 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501654588s
	I1119 02:44:48.432989  330644 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:44:48.442965  330644 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:44:48.450246  330644 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:44:48.450564  330644 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-956139 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:44:48.457630  330644 kubeadm.go:319] [bootstrap-token] Using token: bpq1za.q7wy15mme3dprzfy
	I1119 02:44:48.458785  330644 out.go:252]   - Configuring RBAC rules ...
	I1119 02:44:48.458936  330644 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:44:48.461935  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:44:48.466914  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:44:48.469590  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:44:48.472718  330644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:44:48.475031  330644 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:44:48.827275  330644 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:44:49.241863  330644 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:44:49.827545  330644 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:44:49.828386  330644 kubeadm.go:319] 
	I1119 02:44:49.828472  330644 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:44:49.828485  330644 kubeadm.go:319] 
	I1119 02:44:49.828608  330644 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:44:49.828625  330644 kubeadm.go:319] 
	I1119 02:44:49.828650  330644 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:44:49.828731  330644 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:44:49.828818  330644 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:44:49.828832  330644 kubeadm.go:319] 
	I1119 02:44:49.828906  330644 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:44:49.828916  330644 kubeadm.go:319] 
	I1119 02:44:49.828980  330644 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:44:49.828990  330644 kubeadm.go:319] 
	I1119 02:44:49.829055  330644 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:44:49.829166  330644 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:44:49.829226  330644 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:44:49.829233  330644 kubeadm.go:319] 
	I1119 02:44:49.829341  330644 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:44:49.829450  330644 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:44:49.829464  330644 kubeadm.go:319] 
	I1119 02:44:49.829567  330644 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.829694  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 \
	I1119 02:44:49.829727  330644 kubeadm.go:319] 	--control-plane 
	I1119 02:44:49.829737  330644 kubeadm.go:319] 
	I1119 02:44:49.829830  330644 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:44:49.829840  330644 kubeadm.go:319] 
	I1119 02:44:49.829940  330644 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bpq1za.q7wy15mme3dprzfy \
	I1119 02:44:49.830063  330644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4165d7789cb5b41e2fdbecffedd9aece71a7c390b7f673c978e979d9f1b4ab32 
	I1119 02:44:49.832633  330644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:44:49.832729  330644 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
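	
	The join token and CA hash printed above can be re-derived on the control plane later. A sketch using the stock kubeadm recipe (note: minikube keeps its PKI under /var/lib/minikube/certs rather than the /etc/kubernetes/pki path assumed here):
	
	    kubeadm token list
	    # recompute --discovery-token-ca-cert-hash from the cluster CA
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	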
	I1119 02:44:49.832752  330644 cni.go:84] Creating CNI manager for ""
	I1119 02:44:49.832761  330644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:44:49.834994  330644 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:44:49.836244  330644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:44:49.840560  330644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:44:49.840576  330644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:44:49.852577  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
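	
	The applied manifest is the kindnet CNI (see the kindnet-s65nc pod later in this log). A hedged rollout check; the DaemonSet name "kindnet" and the label "app=kindnet" are assumptions inferred from the pod name, not confirmed by this log:
	
	    kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s   # name assumed
	    kubectl -n kube-system get pods -l app=kindnet -o wide                  # label assumed
	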
	I1119 02:44:50.080027  330644 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:44:50.080080  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:50.080111  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-956139 minikube.k8s.io/updated_at=2025_11_19T02_44_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=newest-cni-956139 minikube.k8s.io/primary=true
	I1119 02:44:50.181807  330644 ops.go:34] apiserver oom_adj: -16
	I1119 02:44:50.183726  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:50.684625  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:51.184631  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:51.684630  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:52.184401  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:52.684596  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:53.183868  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:53.683849  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:54.184175  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:54.684642  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:55.184680  330644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:44:55.255079  330644 kubeadm.go:1114] duration metric: took 5.175044255s to wait for elevateKubeSystemPrivileges
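	
	The loop above polls "kubectl get sa default" roughly twice a second (about 5.2s here) until the default ServiceAccount exists, a signal that the ServiceAccount controller is up before the RBAC grant is relied on. A shell sketch of the same wait-then-grant pattern (the ordering here is illustrative; the create command is taken from the 02:44:50.080 line above):
	
	    # wait for the "default" ServiceAccount, then grant kube-system:default cluster-admin
	    until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done
	    kubectl create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default
	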
	I1119 02:44:55.255111  330644 kubeadm.go:403] duration metric: took 15.388250216s to StartCluster
	I1119 02:44:55.255131  330644 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:55.255207  330644 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:44:55.257307  330644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:44:55.257611  330644 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:44:55.257651  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:44:55.257666  330644 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:44:55.257759  330644 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956139"
	I1119 02:44:55.257779  330644 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956139"
	I1119 02:44:55.257784  330644 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956139"
	I1119 02:44:55.257825  330644 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:44:55.257829  330644 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956139"
	I1119 02:44:55.257852  330644 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:55.258176  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.258487  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.259164  330644 out.go:179] * Verifying Kubernetes components...
	I1119 02:44:55.261074  330644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:44:55.287881  330644 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:44:55.288607  330644 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956139"
	I1119 02:44:55.288655  330644 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:44:55.288995  330644 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:55.289013  330644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:44:55.289063  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:55.289112  330644 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:44:55.320315  330644 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:55.320506  330644 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:44:55.320689  330644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:44:55.327680  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:55.349806  330644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:44:55.379730  330644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:44:55.451250  330644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:44:55.457636  330644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:44:55.480367  330644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:44:55.612641  330644 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
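	
	The sed pipeline at 02:44:55 rewrites the coredns ConfigMap in place: it inserts a "log" directive before "errors" and a "hosts" block before the "forward" line. The resulting Corefile excerpt, reconstructed from those sed expressions (untouched stanzas elided):
	
	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }
	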
	I1119 02:44:55.614898  330644 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:44:55.614959  330644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:44:55.811014  330644 api_server.go:72] duration metric: took 553.367498ms to wait for apiserver process to appear ...
	I1119 02:44:55.811040  330644 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:44:55.811059  330644 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:44:55.819776  330644 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:44:55.820578  330644 api_server.go:141] control plane version: v1.34.1
	I1119 02:44:55.820609  330644 api_server.go:131] duration metric: took 9.561354ms to wait for apiserver health ...
	I1119 02:44:55.820618  330644 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:44:55.822755  330644 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:44:55.823417  330644 system_pods.go:59] 8 kube-system pods found
	I1119 02:44:55.823483  330644 system_pods.go:61] "coredns-66bc5c9577-l7vmx" [0d704d05-424c-4c54-bdf6-a5ec01cbcbf8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:44:55.823503  330644 system_pods.go:61] "etcd-newest-cni-956139" [724e0280-bcab-4c1e-aae3-5a7a72519d23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:44:55.823514  330644 system_pods.go:61] "kindnet-s65nc" [20583cba-5129-470f-b6f9-869642b28f93] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:44:55.823537  330644 system_pods.go:61] "kube-apiserver-newest-cni-956139" [a81fa4fa-fea5-4996-9230-94e06fb3b276] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:44:55.823553  330644 system_pods.go:61] "kube-controller-manager-newest-cni-956139" [a93f6b9a-946c-4099-bbc0-139db17304e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:44:55.823562  330644 system_pods.go:61] "kube-proxy-7frpm" [7f447bc0-73e5-4008-b474-551b69553ce3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:44:55.823583  330644 system_pods.go:61] "kube-scheduler-newest-cni-956139" [ebd7110b-7108-4bca-b86d-c7126087da9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:44:55.823592  330644 system_pods.go:61] "storage-provisioner" [b8a81262-3433-4dd4-a802-58a9b4440545] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:44:55.823603  330644 system_pods.go:74] duration metric: took 2.978578ms to wait for pod list to return data ...
	I1119 02:44:55.823616  330644 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:44:55.824202  330644 addons.go:515] duration metric: took 566.533433ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:44:55.825650  330644 default_sa.go:45] found service account: "default"
	I1119 02:44:55.825669  330644 default_sa.go:55] duration metric: took 2.044637ms for default service account to be created ...
	I1119 02:44:55.825682  330644 kubeadm.go:587] duration metric: took 568.038142ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:44:55.825701  330644 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:44:55.827786  330644 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:44:55.827811  330644 node_conditions.go:123] node cpu capacity is 8
	I1119 02:44:55.827828  330644 node_conditions.go:105] duration metric: took 2.120628ms to run NodePressure ...
	I1119 02:44:55.827844  330644 start.go:242] waiting for startup goroutines ...
	I1119 02:44:56.120226  330644 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-956139" context rescaled to 1 replicas
	I1119 02:44:56.120268  330644 start.go:247] waiting for cluster config update ...
	I1119 02:44:56.120378  330644 start.go:256] writing updated cluster config ...
	I1119 02:44:56.120780  330644 ssh_runner.go:195] Run: rm -f paused
	I1119 02:44:56.183785  330644 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:44:56.185393  330644 out.go:179] * Done! kubectl is now configured to use "newest-cni-956139" cluster and "default" namespace by default
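	
	Only storage-provisioner and default-storageclass were enabled on this profile. A hedged post-start check ("standard" is minikube's usual default StorageClass name, assumed here):
	
	    minikube -p newest-cni-956139 addons list
	    kubectl get storageclass    # expect "standard (default)"
	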
	
	
	==> CRI-O <==
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.131611035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.134298632Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=50fff1af-c001-4fdb-a1b2-5adb218fac52 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.135025442Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1f0ccd7b-dec5-4c52-833d-ae5430b0de93 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.135947872Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.136317399Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.136867435Z" level=info msg="Ran pod sandbox d376d5eaf4e7d479d81867fe22fe7c5cbfba58d41aceab88f3ea745a4353441b with infra container: kube-system/kube-proxy-7frpm/POD" id=50fff1af-c001-4fdb-a1b2-5adb218fac52 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.137078356Z" level=info msg="Ran pod sandbox 2ae78e5e1799713be449e2fc8a5aa94916aff2baa24b87f7ab76c9aadae3d11a with infra container: kube-system/kindnet-s65nc/POD" id=1f0ccd7b-dec5-4c52-833d-ae5430b0de93 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.137923775Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d8f45410-0ec8-4763-a8ec-03e52500a17d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.137981896Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c2da5c99-13a8-42ad-9c50-e2c7ce74eb5e name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.138883576Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=14c714d2-3988-46ad-82aa-9135c0e9194d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.138910621Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=866e5bba-734e-43ff-85d3-8a8e27aef0c2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.142688448Z" level=info msg="Creating container: kube-system/kindnet-s65nc/kindnet-cni" id=a36f2c4f-7c4c-4887-824c-7adc39417808 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.142764602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.14389118Z" level=info msg="Creating container: kube-system/kube-proxy-7frpm/kube-proxy" id=08440346-f797-4b1a-8c82-2b8360416c6e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.143973453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.147197964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.147653802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.14913891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.149608836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.17013999Z" level=info msg="Created container 6147343595ebab06c3bca41adc051e039760ad80e43fc06e4a2ea031517f875e: kube-system/kindnet-s65nc/kindnet-cni" id=a36f2c4f-7c4c-4887-824c-7adc39417808 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.170802058Z" level=info msg="Starting container: 6147343595ebab06c3bca41adc051e039760ad80e43fc06e4a2ea031517f875e" id=1b392bb9-578b-4058-874f-ca72d497e628 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.172842809Z" level=info msg="Started container" PID=1534 containerID=6147343595ebab06c3bca41adc051e039760ad80e43fc06e4a2ea031517f875e description=kube-system/kindnet-s65nc/kindnet-cni id=1b392bb9-578b-4058-874f-ca72d497e628 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ae78e5e1799713be449e2fc8a5aa94916aff2baa24b87f7ab76c9aadae3d11a
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.174266736Z" level=info msg="Created container 9595b300e72f9434863da102f2bd1f4bd1ad225139cb2a6aad4dd5886e550826: kube-system/kube-proxy-7frpm/kube-proxy" id=08440346-f797-4b1a-8c82-2b8360416c6e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.174808352Z" level=info msg="Starting container: 9595b300e72f9434863da102f2bd1f4bd1ad225139cb2a6aad4dd5886e550826" id=2947dae9-c058-4d0e-a56b-f04328928a27 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:55 newest-cni-956139 crio[781]: time="2025-11-19T02:44:55.177554791Z" level=info msg="Started container" PID=1535 containerID=9595b300e72f9434863da102f2bd1f4bd1ad225139cb2a6aad4dd5886e550826 description=kube-system/kube-proxy-7frpm/kube-proxy id=2947dae9-c058-4d0e-a56b-f04328928a27 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d376d5eaf4e7d479d81867fe22fe7c5cbfba58d41aceab88f3ea745a4353441b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9595b300e72f9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   d376d5eaf4e7d       kube-proxy-7frpm                            kube-system
	6147343595eba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   2ae78e5e17997       kindnet-s65nc                               kube-system
	9d19216777e87       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   65b95917725e1       kube-apiserver-newest-cni-956139            kube-system
	b603b56f9e6d9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   b903a4040ba31       kube-controller-manager-newest-cni-956139   kube-system
	bdf7f732bd687       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   aec6be816f463       etcd-newest-cni-956139                      kube-system
	4077b8453d887       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   58777a91c0527       kube-scheduler-newest-cni-956139            kube-system
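	
	This table is the CRI-level view of the node. The same listing can be reproduced through the profile's SSH tunnel (a sketch, assuming crictl is present on the node image, as it is for minikube's CRI-O images):
	
	    minikube -p newest-cni-956139 ssh -- sudo crictl ps
	    minikube -p newest-cni-956139 ssh -- sudo crictl pods
	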
	
	
	==> describe nodes <==
	Name:               newest-cni-956139
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-956139
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=newest-cni-956139
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_44_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:44:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-956139
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:49 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:49 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:49 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 02:44:49 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-956139
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                cae1bba0-7daf-47af-a2b2-8c3f8909ef7d
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-956139                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-s65nc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-956139             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-956139    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-7frpm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-956139             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 2s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-956139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-956139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-956139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-956139 event: Registered Node newest-cni-956139 in Controller
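	
	The node still carries the node.kubernetes.io/not-ready:NoSchedule taint because, per the Ready condition above, no CNI config had been written to /etc/cni/net.d yet; once kindnet drops one, kubelet reports Ready and the taint is lifted. A sketch for watching that transition:
	
	    kubectl get node newest-cni-956139 -o jsonpath='{.spec.taints}'; echo
	    kubectl get node newest-cni-956139 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	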
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [bdf7f732bd68703eeccf2dfdad9f10d01bffd736f5ccdaff2f68a8c2bbe33eab] <==
	{"level":"warn","ts":"2025-11-19T02:44:46.117674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.126318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.132427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.138897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.149643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.155107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.160813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.166268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.172095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.178592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.184454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.190688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.196667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.203044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.209023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.215473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.221542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.227476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.233817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.240009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.245993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.261699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.267917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.274923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:46.328745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59036","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:44:57 up  1:27,  0 user,  load average: 3.11, 3.26, 2.28
	Linux newest-cni-956139 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6147343595ebab06c3bca41adc051e039760ad80e43fc06e4a2ea031517f875e] <==
	I1119 02:44:55.415127       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:44:55.415427       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:44:55.415603       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:44:55.415617       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:44:55.415630       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:44:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:44:55.710779       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:44:55.710852       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:44:55.710905       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:44:55.711085       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:44:56.111567       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:44:56.111684       1 metrics.go:72] Registering metrics
	I1119 02:44:56.111815       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9d19216777e87acb3d081c4f5fb514295c54c1f86407542f40a320f1d9f9ef9e] <==
	E1119 02:44:46.858909       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1119 02:44:46.884994       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1119 02:44:46.907245       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:44:46.909457       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:44:46.909538       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:44:46.913721       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:44:46.913980       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:44:47.087804       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:44:47.714003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:44:47.717858       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:44:47.717933       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:44:48.234047       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:44:48.271193       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:44:48.315997       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:44:48.321658       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 02:44:48.322552       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:44:48.326228       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:44:48.759056       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:44:49.233048       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:44:49.240993       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:44:49.248856       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:44:54.414592       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:44:54.418893       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:44:54.510903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:44:54.810495       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b603b56f9e6d9dbe2dddf5bc92bb79fa761beb806d3ea351ca72f9a37c0abc01] <==
	I1119 02:44:53.742038       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:44:53.757200       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:44:53.757371       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:44:53.757618       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:44:53.757972       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-956139"
	I1119 02:44:53.758110       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:44:53.758181       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 02:44:53.758054       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:44:53.758363       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 02:44:53.758392       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:44:53.758478       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 02:44:53.758800       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 02:44:53.759027       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:44:53.759063       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 02:44:53.759472       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:44:53.759589       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:44:53.759802       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:44:53.760258       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:44:53.761280       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:44:53.761980       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:44:53.762004       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:44:53.767242       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:44:53.775469       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:44:53.780622       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:44:53.785939       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9595b300e72f9434863da102f2bd1f4bd1ad225139cb2a6aad4dd5886e550826] <==
	I1119 02:44:55.214591       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:44:55.281375       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:44:55.383932       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:44:55.384104       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 02:44:55.384352       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:44:55.426047       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:44:55.426117       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:44:55.433102       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:44:55.433663       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:44:55.433702       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:55.442110       1 config.go:200] "Starting service config controller"
	I1119 02:44:55.442758       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:44:55.442180       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:44:55.442848       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:44:55.442200       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:44:55.442920       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:44:55.442270       1 config.go:309] "Starting node config controller"
	I1119 02:44:55.442971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:44:55.442998       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:44:55.542975       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:44:55.543001       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:44:55.543017       1 shared_informer.go:356] "Caches are synced" controller="service config"
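	
	kube-proxy settled on the iptables proxier, so Service rules live in the nat table under the KUBE-SERVICES chain. A quick spot check from the node (sketch):
	
	    minikube -p newest-cni-956139 ssh -- sudo iptables -t nat -S KUBE-SERVICES | head
	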
	
	
	==> kube-scheduler [4077b8453d88759b0f227b72779142f5d4eb0113b330866903d07a6dce5cd916] <==
	E1119 02:44:46.759946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:44:46.759615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:44:46.760047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:44:46.760074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:44:46.760136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:44:46.760154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:44:46.760335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:44:46.760467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:44:46.760501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:44:46.760644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:44:46.761608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:44:47.605325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:44:47.633581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:44:47.713062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:44:47.755691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:44:47.788282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:44:47.791550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:44:47.816126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:44:47.841078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:44:47.880618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:44:47.882584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:44:47.892650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:44:47.941927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:44:48.012414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1119 02:44:50.456407       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.054909    1331 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.085645    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-956139"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.085911    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-956139"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.086165    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-956139"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.086404    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-956139"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: E1119 02:44:50.093588    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-956139\" already exists" pod="kube-system/kube-scheduler-newest-cni-956139"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: E1119 02:44:50.094990    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-956139\" already exists" pod="kube-system/kube-apiserver-newest-cni-956139"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: E1119 02:44:50.095044    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-956139\" already exists" pod="kube-system/etcd-newest-cni-956139"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: E1119 02:44:50.095198    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-956139\" already exists" pod="kube-system/kube-controller-manager-newest-cni-956139"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.126953    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-956139" podStartSLOduration=1.126929158 podStartE2EDuration="1.126929158s" podCreationTimestamp="2025-11-19 02:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:44:50.117272109 +0000 UTC m=+1.126527549" watchObservedRunningTime="2025-11-19 02:44:50.126929158 +0000 UTC m=+1.136184601"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.137323    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-956139" podStartSLOduration=1.137299834 podStartE2EDuration="1.137299834s" podCreationTimestamp="2025-11-19 02:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:44:50.127124358 +0000 UTC m=+1.136379793" watchObservedRunningTime="2025-11-19 02:44:50.137299834 +0000 UTC m=+1.146555275"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.148066    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-956139" podStartSLOduration=1.148044631 podStartE2EDuration="1.148044631s" podCreationTimestamp="2025-11-19 02:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:44:50.137477434 +0000 UTC m=+1.146732858" watchObservedRunningTime="2025-11-19 02:44:50.148044631 +0000 UTC m=+1.157300071"
	Nov 19 02:44:50 newest-cni-956139 kubelet[1331]: I1119 02:44:50.159809    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-956139" podStartSLOduration=1.159785657 podStartE2EDuration="1.159785657s" podCreationTimestamp="2025-11-19 02:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:44:50.148326998 +0000 UTC m=+1.157582426" watchObservedRunningTime="2025-11-19 02:44:50.159785657 +0000 UTC m=+1.169041097"
	Nov 19 02:44:53 newest-cni-956139 kubelet[1331]: I1119 02:44:53.793461    1331 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 02:44:53 newest-cni-956139 kubelet[1331]: I1119 02:44:53.794230    1331 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 02:44:54 newest-cni-956139 kubelet[1331]: I1119 02:44:54.900659    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-cni-cfg\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:44:54 newest-cni-956139 kubelet[1331]: I1119 02:44:54.900709    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccvp\" (UniqueName: \"kubernetes.io/projected/20583cba-5129-470f-b6f9-869642b28f93-kube-api-access-nccvp\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:44:54 newest-cni-956139 kubelet[1331]: I1119 02:44:54.900730    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f447bc0-73e5-4008-b474-551b69553ce3-kube-proxy\") pod \"kube-proxy-7frpm\" (UID: \"7f447bc0-73e5-4008-b474-551b69553ce3\") " pod="kube-system/kube-proxy-7frpm"
	Nov 19 02:44:54 newest-cni-956139 kubelet[1331]: I1119 02:44:54.900750    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f447bc0-73e5-4008-b474-551b69553ce3-lib-modules\") pod \"kube-proxy-7frpm\" (UID: \"7f447bc0-73e5-4008-b474-551b69553ce3\") " pod="kube-system/kube-proxy-7frpm"
	Nov 19 02:44:54 newest-cni-956139 kubelet[1331]: I1119 02:44:54.900774    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-xtables-lock\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:44:54 newest-cni-956139 kubelet[1331]: I1119 02:44:54.900792    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-lib-modules\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:44:54 newest-cni-956139 kubelet[1331]: I1119 02:44:54.900919    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f447bc0-73e5-4008-b474-551b69553ce3-xtables-lock\") pod \"kube-proxy-7frpm\" (UID: \"7f447bc0-73e5-4008-b474-551b69553ce3\") " pod="kube-system/kube-proxy-7frpm"
	Nov 19 02:44:54 newest-cni-956139 kubelet[1331]: I1119 02:44:54.900951    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgfdk\" (UniqueName: \"kubernetes.io/projected/7f447bc0-73e5-4008-b474-551b69553ce3-kube-api-access-xgfdk\") pod \"kube-proxy-7frpm\" (UID: \"7f447bc0-73e5-4008-b474-551b69553ce3\") " pod="kube-system/kube-proxy-7frpm"
	Nov 19 02:44:56 newest-cni-956139 kubelet[1331]: I1119 02:44:56.132934    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7frpm" podStartSLOduration=2.132908304 podStartE2EDuration="2.132908304s" podCreationTimestamp="2025-11-19 02:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:44:56.120232362 +0000 UTC m=+7.129487818" watchObservedRunningTime="2025-11-19 02:44:56.132908304 +0000 UTC m=+7.142163747"
	Nov 19 02:44:56 newest-cni-956139 kubelet[1331]: I1119 02:44:56.133065    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s65nc" podStartSLOduration=2.133060345 podStartE2EDuration="2.133060345s" podCreationTimestamp="2025-11-19 02:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:44:56.132541271 +0000 UTC m=+7.141796711" watchObservedRunningTime="2025-11-19 02:44:56.133060345 +0000 UTC m=+7.142315787"
	

-- /stdout --
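The kube-scheduler reflector errors above ("cannot list resource ... at the cluster scope") are characteristic of a control plane that is still bootstrapping: the scheduler's informers start before the RBAC bindings are fully reconciled, and the closing "Caches are synced" line indicates they recovered once authorization converged. If such errors persisted, the scheduler's effective permissions could be probed directly; an illustrative check (not part of the recorded run):

	kubectl --context newest-cni-956139 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context newest-cni-956139 get clusterrolebinding system:kube-scheduler -o wide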
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956139 -n newest-cni-956139
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-956139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-l7vmx storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner: exit status 1 (55.988208ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-l7vmx" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.08s)
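The NotFound errors above are a race in the post-mortem itself: coredns-66bc5c9577-l7vmx and storage-provisioner were listed as non-running, but had already been deleted (or recreated under new names) by the time kubectl describe ran. Re-running the same field-selector query would show whichever pods exist at that moment; illustrative:

	kubectl --context newest-cni-956139 get pods -A --field-selector=status.phase!=Running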

TestStartStop/group/no-preload/serial/Pause (5.71s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-837474 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-837474 --alsologtostderr -v=1: exit status 80 (2.268776675s)

-- stdout --
	* Pausing node no-preload-837474 ... 
	
	

-- /stdout --
** stderr ** 
	I1119 02:44:59.657316  338512 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:44:59.657595  338512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:59.657605  338512 out.go:374] Setting ErrFile to fd 2...
	I1119 02:44:59.657609  338512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:44:59.657812  338512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:44:59.658030  338512 out.go:368] Setting JSON to false
	I1119 02:44:59.658069  338512 mustload.go:66] Loading cluster: no-preload-837474
	I1119 02:44:59.658388  338512 config.go:182] Loaded profile config "no-preload-837474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:44:59.658802  338512 cli_runner.go:164] Run: docker container inspect no-preload-837474 --format={{.State.Status}}
	I1119 02:44:59.677058  338512 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:44:59.677486  338512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:44:59.738961  338512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 02:44:59.728713257 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:44:59.739615  338512 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-837474 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 02:44:59.741970  338512 out.go:179] * Pausing node no-preload-837474 ... 
	I1119 02:44:59.743314  338512 host.go:66] Checking if "no-preload-837474" exists ...
	I1119 02:44:59.743620  338512 ssh_runner.go:195] Run: systemctl --version
	I1119 02:44:59.743668  338512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-837474
	I1119 02:44:59.763175  338512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/no-preload-837474/id_rsa Username:docker}
	I1119 02:44:59.860587  338512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:44:59.883293  338512 pause.go:52] kubelet running: true
	I1119 02:44:59.883389  338512 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:45:00.062880  338512 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:45:00.062995  338512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:45:00.127342  338512 cri.go:89] found id: "13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd"
	I1119 02:45:00.127363  338512 cri.go:89] found id: "0ece14c0d2989fd84a8306761a16f42605844ab5403451efb55760cdf31a20a0"
	I1119 02:45:00.127367  338512 cri.go:89] found id: "bedbafff131c81000054173f696af6ba2d4a9350b8cbdd2985d33561fa58f639"
	I1119 02:45:00.127370  338512 cri.go:89] found id: "07e1d6dcc863e8f7d82680f4ba851550938cc3f8b8fb1b8080aac242af438836"
	I1119 02:45:00.127373  338512 cri.go:89] found id: "473b9088f9191f71c546e80811be04f683a56318e61de78d2d9edc9173aac7de"
	I1119 02:45:00.127376  338512 cri.go:89] found id: "6dd757954ee069960d7775b0cb8053165f8ed7b87e78e24e092a5d8d6ad8c518"
	I1119 02:45:00.127378  338512 cri.go:89] found id: "348a7baf54addbf4a9c81030950fa886111d02619363237a83c83efe031b6e4e"
	I1119 02:45:00.127381  338512 cri.go:89] found id: "e25eec2afaa5d216ff068aae46bf36572a21229c3f7eba57128ac16e1b16a13a"
	I1119 02:45:00.127383  338512 cri.go:89] found id: "70ad4cd08b245fb372615d7c559ce529ef762f5d44fc541f9bc7000ebd69b651"
	I1119 02:45:00.127389  338512 cri.go:89] found id: "423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	I1119 02:45:00.127391  338512 cri.go:89] found id: "bf02d90b6bbfa6d0799547f56beede01425bff1af868efbb4db1bed287e9ed6a"
	I1119 02:45:00.127394  338512 cri.go:89] found id: ""
	I1119 02:45:00.127464  338512 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:45:00.138787  338512 retry.go:31] will retry after 176.949477ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:00Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:45:00.316247  338512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:45:00.333050  338512 pause.go:52] kubelet running: false
	I1119 02:45:00.333112  338512 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:45:00.491363  338512 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:45:00.491427  338512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:45:00.556787  338512 cri.go:89] found id: "13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd"
	I1119 02:45:00.556807  338512 cri.go:89] found id: "0ece14c0d2989fd84a8306761a16f42605844ab5403451efb55760cdf31a20a0"
	I1119 02:45:00.556812  338512 cri.go:89] found id: "bedbafff131c81000054173f696af6ba2d4a9350b8cbdd2985d33561fa58f639"
	I1119 02:45:00.556816  338512 cri.go:89] found id: "07e1d6dcc863e8f7d82680f4ba851550938cc3f8b8fb1b8080aac242af438836"
	I1119 02:45:00.556820  338512 cri.go:89] found id: "473b9088f9191f71c546e80811be04f683a56318e61de78d2d9edc9173aac7de"
	I1119 02:45:00.556823  338512 cri.go:89] found id: "6dd757954ee069960d7775b0cb8053165f8ed7b87e78e24e092a5d8d6ad8c518"
	I1119 02:45:00.556825  338512 cri.go:89] found id: "348a7baf54addbf4a9c81030950fa886111d02619363237a83c83efe031b6e4e"
	I1119 02:45:00.556827  338512 cri.go:89] found id: "e25eec2afaa5d216ff068aae46bf36572a21229c3f7eba57128ac16e1b16a13a"
	I1119 02:45:00.556830  338512 cri.go:89] found id: "70ad4cd08b245fb372615d7c559ce529ef762f5d44fc541f9bc7000ebd69b651"
	I1119 02:45:00.556840  338512 cri.go:89] found id: "423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	I1119 02:45:00.556844  338512 cri.go:89] found id: "bf02d90b6bbfa6d0799547f56beede01425bff1af868efbb4db1bed287e9ed6a"
	I1119 02:45:00.556846  338512 cri.go:89] found id: ""
	I1119 02:45:00.556881  338512 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:45:00.568523  338512 retry.go:31] will retry after 262.88394ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:00Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:45:00.832019  338512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:45:00.844936  338512 pause.go:52] kubelet running: false
	I1119 02:45:00.844983  338512 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:45:00.987145  338512 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:45:00.987210  338512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:45:01.061572  338512 cri.go:89] found id: "13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd"
	I1119 02:45:01.061596  338512 cri.go:89] found id: "0ece14c0d2989fd84a8306761a16f42605844ab5403451efb55760cdf31a20a0"
	I1119 02:45:01.061601  338512 cri.go:89] found id: "bedbafff131c81000054173f696af6ba2d4a9350b8cbdd2985d33561fa58f639"
	I1119 02:45:01.061604  338512 cri.go:89] found id: "07e1d6dcc863e8f7d82680f4ba851550938cc3f8b8fb1b8080aac242af438836"
	I1119 02:45:01.061607  338512 cri.go:89] found id: "473b9088f9191f71c546e80811be04f683a56318e61de78d2d9edc9173aac7de"
	I1119 02:45:01.061610  338512 cri.go:89] found id: "6dd757954ee069960d7775b0cb8053165f8ed7b87e78e24e092a5d8d6ad8c518"
	I1119 02:45:01.061612  338512 cri.go:89] found id: "348a7baf54addbf4a9c81030950fa886111d02619363237a83c83efe031b6e4e"
	I1119 02:45:01.061615  338512 cri.go:89] found id: "e25eec2afaa5d216ff068aae46bf36572a21229c3f7eba57128ac16e1b16a13a"
	I1119 02:45:01.061617  338512 cri.go:89] found id: "70ad4cd08b245fb372615d7c559ce529ef762f5d44fc541f9bc7000ebd69b651"
	I1119 02:45:01.061622  338512 cri.go:89] found id: "423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	I1119 02:45:01.061624  338512 cri.go:89] found id: "bf02d90b6bbfa6d0799547f56beede01425bff1af868efbb4db1bed287e9ed6a"
	I1119 02:45:01.061626  338512 cri.go:89] found id: ""
	I1119 02:45:01.061665  338512 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:45:01.074459  338512 retry.go:31] will retry after 466.70023ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:01Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:45:01.542178  338512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:45:01.555921  338512 pause.go:52] kubelet running: false
	I1119 02:45:01.555980  338512 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:45:01.745590  338512 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:45:01.745675  338512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:45:01.822891  338512 cri.go:89] found id: "13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd"
	I1119 02:45:01.822915  338512 cri.go:89] found id: "0ece14c0d2989fd84a8306761a16f42605844ab5403451efb55760cdf31a20a0"
	I1119 02:45:01.822920  338512 cri.go:89] found id: "bedbafff131c81000054173f696af6ba2d4a9350b8cbdd2985d33561fa58f639"
	I1119 02:45:01.822924  338512 cri.go:89] found id: "07e1d6dcc863e8f7d82680f4ba851550938cc3f8b8fb1b8080aac242af438836"
	I1119 02:45:01.822927  338512 cri.go:89] found id: "473b9088f9191f71c546e80811be04f683a56318e61de78d2d9edc9173aac7de"
	I1119 02:45:01.822931  338512 cri.go:89] found id: "6dd757954ee069960d7775b0cb8053165f8ed7b87e78e24e092a5d8d6ad8c518"
	I1119 02:45:01.822935  338512 cri.go:89] found id: "348a7baf54addbf4a9c81030950fa886111d02619363237a83c83efe031b6e4e"
	I1119 02:45:01.822939  338512 cri.go:89] found id: "e25eec2afaa5d216ff068aae46bf36572a21229c3f7eba57128ac16e1b16a13a"
	I1119 02:45:01.822943  338512 cri.go:89] found id: "70ad4cd08b245fb372615d7c559ce529ef762f5d44fc541f9bc7000ebd69b651"
	I1119 02:45:01.822949  338512 cri.go:89] found id: "423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	I1119 02:45:01.822953  338512 cri.go:89] found id: "bf02d90b6bbfa6d0799547f56beede01425bff1af868efbb4db1bed287e9ed6a"
	I1119 02:45:01.822956  338512 cri.go:89] found id: ""
	I1119 02:45:01.823010  338512 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:45:01.846085  338512 out.go:203] 
	W1119 02:45:01.848522  338512 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:45:01.848547  338512 out.go:285] * 
	* 
	W1119 02:45:01.854700  338512 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:45:01.857951  338512 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-837474 --alsologtostderr -v=1 failed: exit status 80
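The exit status 80 (GUEST_PAUSE) traces back to a single failing step: sudo runc list -f json exits 1 with "open /run/runc: no such file or directory", even though crictl reports eleven running containers. One plausible reading is that the node's CRI-O is driving containers through an OCI runtime root other than /run/runc (for example crun), so the state directory minikube's pause path consults never gets created. Illustrative checks against the node container (assumed invocations, not part of the recorded run):

	docker exec no-preload-837474 ls /run/runc                      # the state dir the pause path reads
	docker exec no-preload-837474 sudo crictl ps                    # containers as CRI-O reports them
	docker exec no-preload-837474 sudo runc --root /run/runc list   # the exact call that failed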
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-837474
helpers_test.go:243: (dbg) docker inspect no-preload-837474:

-- stdout --
	[
	    {
	        "Id": "778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b",
	        "Created": "2025-11-19T02:42:31.131345889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323033,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:43:55.670058111Z",
	            "FinishedAt": "2025-11-19T02:43:54.700318358Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/hostname",
	        "HostsPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/hosts",
	        "LogPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b-json.log",
	        "Name": "/no-preload-837474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-837474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-837474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b",
	                "LowerDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-837474",
	                "Source": "/var/lib/docker/volumes/no-preload-837474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-837474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-837474",
	                "name.minikube.sigs.k8s.io": "no-preload-837474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d5cdce2bfca5eaa4d18d5bf05e41359f29d3bbcb425c276f1a13770ca5195d3d",
	            "SandboxKey": "/var/run/docker/netns/d5cdce2bfca5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-837474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5cdcd8e70bb42a339d48cf84add8542874b20a7d1d91c5ad0bc5b1415ad92cb",
	                    "EndpointID": "fe08729209473e900b34bde82176d683056286b90f22233987d877d4ddd6a895",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "e6:71:cc:96:ab:bd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-837474",
	                        "778842a2abfd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
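The inspect output confirms the pause never reached the container runtime: State.Status is "running" and State.Paused is false. Those two fields can be read directly with a Go template; illustrative:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-837474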
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837474 -n no-preload-837474
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837474 -n no-preload-837474: exit status 2 (340.113247ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
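Exit status 2 from minikube status is expected here rather than a second failure: the pause attempt had already run systemctl disable --now kubelet, so the host reports Running while components behind it are stopped, and status encodes the degraded state in its exit code. A broader template reads the other fields in one call; illustrative:

	out/minikube-linux-amd64 status -p no-preload-837474 --format '{{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'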
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-837474 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ default-k8s-diff-port-167150 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p default-k8s-diff-port-167150 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p newest-cni-956139 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:45 UTC │
	│ image   │ no-preload-837474 image list --format=json                                                                                                                                                                                                    │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p no-preload-837474 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-956139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:45:01.079633  339411 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:45:01.079896  339411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:01.079905  339411 out.go:374] Setting ErrFile to fd 2...
	I1119 02:45:01.079910  339411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:01.080082  339411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:45:01.080519  339411 out.go:368] Setting JSON to false
	I1119 02:45:01.081542  339411 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5248,"bootTime":1763515053,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:45:01.081600  339411 start.go:143] virtualization: kvm guest
	I1119 02:45:01.083571  339411 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:45:01.087840  339411 notify.go:221] Checking for updates...
	I1119 02:45:01.089172  339411 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:45:01.090249  339411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:45:01.091223  339411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:45:01.092319  339411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:45:01.093387  339411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:45:01.094361  339411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:45:01.095805  339411 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:01.096303  339411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:45:01.118824  339411 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:45:01.118911  339411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:01.181407  339411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:false NGoroutines:68 SystemTime:2025-11-19 02:45:01.171333163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:01.181546  339411 docker.go:319] overlay module found
	I1119 02:45:01.183364  339411 out.go:179] * Using the docker driver based on existing profile
	I1119 02:45:01.184618  339411 start.go:309] selected driver: docker
	I1119 02:45:01.184633  339411 start.go:930] validating driver "docker" against &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:01.184704  339411 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:45:01.185227  339411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:01.247035  339411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:false NGoroutines:62 SystemTime:2025-11-19 02:45:01.237379667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:01.247406  339411 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:45:01.247459  339411 cni.go:84] Creating CNI manager for ""
	I1119 02:45:01.247540  339411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:45:01.247593  339411 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:01.250071  339411 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:45:01.251167  339411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:45:01.252290  339411 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:45:01.253285  339411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:45:01.253315  339411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:45:01.253324  339411 cache.go:65] Caching tarball of preloaded images
	I1119 02:45:01.253382  339411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:45:01.253423  339411 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:45:01.253478  339411 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:45:01.253611  339411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:45:01.272884  339411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:45:01.272903  339411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:45:01.272916  339411 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:45:01.272936  339411 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:45:01.272994  339411 start.go:364] duration metric: took 34.51µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:45:01.273009  339411 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:45:01.273016  339411 fix.go:54] fixHost starting: 
	I1119 02:45:01.273196  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:01.289647  339411 fix.go:112] recreateIfNeeded on newest-cni-956139: state=Stopped err=<nil>
	W1119 02:45:01.289674  339411 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 19 02:44:19 no-preload-837474 crio[568]: time="2025-11-19T02:44:19.689651224Z" level=info msg="Started container" PID=1730 containerID=7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper id=e70dc06f-2a61-4363-bd54-f5ec0252e802 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1fbd6af4990cba7a23fa0f427e7505ab0f325db17d9d024c37095222ed79756
	Nov 19 02:44:20 no-preload-837474 crio[568]: time="2025-11-19T02:44:20.642156467Z" level=info msg="Removing container: c3ef147f11e7853a0b7f0c7e61291aae54cafa7f5fc041da0163c1e18ccc5a18" id=97c3ab62-2f8c-41c7-97e7-5bb211c21486 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:20 no-preload-837474 crio[568]: time="2025-11-19T02:44:20.652117293Z" level=info msg="Removed container c3ef147f11e7853a0b7f0c7e61291aae54cafa7f5fc041da0163c1e18ccc5a18: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper" id=97c3ab62-2f8c-41c7-97e7-5bb211c21486 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.533232276Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2a99776e-5229-46af-a398-89ad519417ff name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.53408035Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d8d5e53e-3fb0-4569-ab1b-04a28d9d8e6a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.535132827Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper" id=5b414989-f41f-4ffc-a908-11da5a61c046 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.535265877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.540771886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.541183834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.570913034Z" level=info msg="Created container 423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper" id=5b414989-f41f-4ffc-a908-11da5a61c046 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.571467201Z" level=info msg="Starting container: 423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0" id=c0e29cf9-68c5-4ba8-9a19-2c850baad246 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.57302002Z" level=info msg="Started container" PID=1740 containerID=423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper id=c0e29cf9-68c5-4ba8-9a19-2c850baad246 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1fbd6af4990cba7a23fa0f427e7505ab0f325db17d9d024c37095222ed79756
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.686407093Z" level=info msg="Removing container: 7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45" id=93021a0b-73b2-4429-83d5-1202c3387dd1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.695374042Z" level=info msg="Removed container 7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper" id=93021a0b-73b2-4429-83d5-1202c3387dd1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.69195963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7e6f0dde-0ac7-40d3-bc50-7c397a000ada name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.69278016Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=730cd505-5374-4851-a685-83e2e3ca3d4d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.693736382Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e7926ecb-1bb8-4264-ad7a-3cf77dc5f639 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.69386466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.699285164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.699477582Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/df584b1c0279f560e5ef7b312a2e22c8f61379c0af1d028629850b198d60b92c/merged/etc/passwd: no such file or directory"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.699512153Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/df584b1c0279f560e5ef7b312a2e22c8f61379c0af1d028629850b198d60b92c/merged/etc/group: no such file or directory"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.699787659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.726895503Z" level=info msg="Created container 13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd: kube-system/storage-provisioner/storage-provisioner" id=e7926ecb-1bb8-4264-ad7a-3cf77dc5f639 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.727397931Z" level=info msg="Starting container: 13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd" id=94b0121f-6b39-4660-b80c-72e3dd483f47 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.729108664Z" level=info msg="Started container" PID=1754 containerID=13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd description=kube-system/storage-provisioner/storage-provisioner id=94b0121f-6b39-4660-b80c-72e3dd483f47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3687ee4f5f54bfbe3155afb11f50aedcb931539161985a538ab1a0741ef03e36
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	13ad6da0bd5e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   3687ee4f5f54b       storage-provisioner                          kube-system
	423bb4796dfff       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   a1fbd6af4990c       dashboard-metrics-scraper-6ffb444bf9-z5mvf   kubernetes-dashboard
	bf02d90b6bbfa       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   4e8ab732c7bb3       kubernetes-dashboard-855c9754f9-8rhqr        kubernetes-dashboard
	b65de636abece       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   2476591a6d642       busybox                                      default
	0ece14c0d2989       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   e1d7c42b75d35       coredns-66bc5c9577-44bdr                     kube-system
	bedbafff131c8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   9e2e8b8986e11       kindnet-96d7l                                kube-system
	07e1d6dcc863e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   3687ee4f5f54b       storage-provisioner                          kube-system
	473b9088f9191       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   d4dd6c8cef036       kube-proxy-hmxzk                             kube-system
	6dd757954ee06       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   773a1e1ff0c2f       kube-scheduler-no-preload-837474             kube-system
	348a7baf54add       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   2ec1603fe8b50       kube-controller-manager-no-preload-837474    kube-system
	e25eec2afaa5d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   498ebfb2cb989       kube-apiserver-no-preload-837474             kube-system
	70ad4cd08b245       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   b19c52d4b8a84       etcd-no-preload-837474                       kube-system
	
	
	==> coredns [0ece14c0d2989fd84a8306761a16f42605844ab5403451efb55760cdf31a20a0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35863 - 34326 "HINFO IN 419189069344077813.6948578736365676640. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.451018566s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-837474
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-837474
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=no-preload-837474
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:43:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-837474
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:47 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:47 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:47 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:44:47 +0000   Wed, 19 Nov 2025 02:43:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-837474
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                1196f62d-ee96-4bda-889c-0da66532b529
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-44bdr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-837474                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-96d7l                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-837474              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-837474     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-hmxzk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-837474              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-z5mvf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8rhqr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node no-preload-837474 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node no-preload-837474 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node no-preload-837474 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-837474 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-837474 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-837474 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node no-preload-837474 event: Registered Node no-preload-837474 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-837474 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node no-preload-837474 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node no-preload-837474 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node no-preload-837474 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node no-preload-837474 event: Registered Node no-preload-837474 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [70ad4cd08b245fb372615d7c559ce529ef762f5d44fc541f9bc7000ebd69b651] <==
	{"level":"warn","ts":"2025-11-19T02:44:05.500268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.503052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.518851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.529456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.539108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.563540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.584977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.597574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.614506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.624679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.639294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.650948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.662631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.686766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.698356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.716599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.738181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.745713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.760993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.776756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.786491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.879938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:07.701680Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.743919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/admin\" limit:1 ","response":"range_response_count:1 size:3756"}
	{"level":"info","ts":"2025-11-19T02:44:07.701784Z","caller":"traceutil/trace.go:172","msg":"trace[929684817] range","detail":"{range_begin:/registry/clusterroles/admin; range_end:; response_count:1; response_revision:539; }","duration":"133.876931ms","start":"2025-11-19T02:44:07.567888Z","end":"2025-11-19T02:44:07.701765Z","steps":["trace[929684817] 'agreement among raft nodes before linearized reading'  (duration: 45.740727ms)","trace[929684817] 'range keys from in-memory index tree'  (duration: 87.881734ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:44:07.702052Z","caller":"traceutil/trace.go:172","msg":"trace[2077588863] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"138.301292ms","start":"2025-11-19T02:44:07.563722Z","end":"2025-11-19T02:44:07.702023Z","steps":["trace[2077588863] 'process raft request'  (duration: 49.973155ms)","trace[2077588863] 'compare'  (duration: 87.80872ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:45:02 up  1:27,  0 user,  load average: 3.18, 3.28, 2.29
	Linux no-preload-837474 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bedbafff131c81000054173f696af6ba2d4a9350b8cbdd2985d33561fa58f639] <==
	I1119 02:44:08.147019       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:44:08.147832       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 02:44:08.148157       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:44:08.148311       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:44:08.148386       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:44:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:44:08.448836       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:44:08.449061       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:44:08.449085       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:44:08.449894       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:44:08.825484       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:44:08.825522       1 metrics.go:72] Registering metrics
	I1119 02:44:08.825681       1 controller.go:711] "Syncing nftables rules"
	I1119 02:44:18.447769       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:18.447836       1 main.go:301] handling current node
	I1119 02:44:28.447790       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:28.447839       1 main.go:301] handling current node
	I1119 02:44:38.447664       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:38.447709       1 main.go:301] handling current node
	I1119 02:44:48.446959       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:48.446993       1 main.go:301] handling current node
	I1119 02:44:58.449606       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:58.449640       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e25eec2afaa5d216ff068aae46bf36572a21229c3f7eba57128ac16e1b16a13a] <==
	I1119 02:44:06.616768       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:44:06.616793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:44:06.616830       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:44:06.616959       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 02:44:06.617081       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:44:06.617143       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:44:06.617228       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 02:44:06.619862       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1119 02:44:06.632953       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 02:44:06.639421       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:44:06.644150       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:44:06.654621       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 02:44:06.654899       1 policy_source.go:240] refreshing policies
	I1119 02:44:06.659388       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:44:07.038687       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:44:07.123418       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:44:07.195970       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:44:07.217393       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:44:07.234621       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:44:07.297301       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.51.178"}
	I1119 02:44:07.309724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.69.115"}
	I1119 02:44:07.515811       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:44:09.936560       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:44:10.332960       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:44:10.434319       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [348a7baf54addbf4a9c81030950fa886111d02619363237a83c83efe031b6e4e] <==
	I1119 02:44:09.929539       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:44:09.929574       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:44:09.929572       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 02:44:09.929598       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:44:09.929645       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:44:09.929674       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:44:09.929686       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:44:09.929692       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:44:09.929722       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 02:44:09.931124       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:44:09.931373       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:44:09.931460       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:44:09.932443       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:44:09.933505       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:44:09.935646       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:44:09.936895       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 02:44:09.936945       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:44:09.938142       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 02:44:09.951587       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:44:09.951672       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:44:09.951719       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:44:09.951732       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:44:09.951739       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:44:09.954797       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:44:09.968174       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [473b9088f9191f71c546e80811be04f683a56318e61de78d2d9edc9173aac7de] <==
	I1119 02:44:07.958532       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:44:08.043462       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:44:08.144358       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:44:08.144708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 02:44:08.144888       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:44:08.170662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:44:08.170738       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:44:08.177602       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:44:08.178152       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:44:08.178207       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:08.179822       1 config.go:309] "Starting node config controller"
	I1119 02:44:08.179872       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:44:08.179882       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:44:08.180055       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:44:08.180061       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:44:08.180099       1 config.go:200] "Starting service config controller"
	I1119 02:44:08.180105       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:44:08.180190       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:44:08.180194       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:44:08.280629       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:44:08.280705       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:44:08.280656       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6dd757954ee069960d7775b0cb8053165f8ed7b87e78e24e092a5d8d6ad8c518] <==
	I1119 02:44:04.557347       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:44:06.571743       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:44:06.571893       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1119 02:44:06.572059       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:44:06.572076       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:44:06.611406       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:44:06.611451       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:06.617857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:44:06.617897       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:44:06.618687       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:44:06.618781       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:44:06.718701       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
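The requestheader_controller and authentication warnings are the scheduler racing the apiserver's RBAC bootstrap during startup; they clear once the default clusterroles exist, and the cache sync on the last line shows the controller recovered. Were the condition to persist, the fix suggested by the log message itself (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are its placeholders) would be:

	kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA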
	
	
	==> kubelet <==
	Nov 19 02:44:10 no-preload-837474 kubelet[714]: I1119 02:44:10.673513     714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/875975d2-e3f6-411f-88f9-8c4fa8628e09-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8rhqr\" (UID: \"875975d2-e3f6-411f-88f9-8c4fa8628e09\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8rhqr"
	Nov 19 02:44:16 no-preload-837474 kubelet[714]: I1119 02:44:16.387505     714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 02:44:17 no-preload-837474 kubelet[714]: I1119 02:44:17.144303     714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8rhqr" podStartSLOduration=1.498470998 podStartE2EDuration="7.144280539s" podCreationTimestamp="2025-11-19 02:44:10 +0000 UTC" firstStartedPulling="2025-11-19 02:44:10.887388642 +0000 UTC m=+8.527563668" lastFinishedPulling="2025-11-19 02:44:16.533198163 +0000 UTC m=+14.173373209" observedRunningTime="2025-11-19 02:44:16.638344621 +0000 UTC m=+14.278519660" watchObservedRunningTime="2025-11-19 02:44:17.144280539 +0000 UTC m=+14.784455591"
	Nov 19 02:44:19 no-preload-837474 kubelet[714]: I1119 02:44:19.635642     714 scope.go:117] "RemoveContainer" containerID="c3ef147f11e7853a0b7f0c7e61291aae54cafa7f5fc041da0163c1e18ccc5a18"
	Nov 19 02:44:20 no-preload-837474 kubelet[714]: I1119 02:44:20.640742     714 scope.go:117] "RemoveContainer" containerID="c3ef147f11e7853a0b7f0c7e61291aae54cafa7f5fc041da0163c1e18ccc5a18"
	Nov 19 02:44:20 no-preload-837474 kubelet[714]: I1119 02:44:20.640914     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:20 no-preload-837474 kubelet[714]: E1119 02:44:20.641086     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:21 no-preload-837474 kubelet[714]: I1119 02:44:21.645248     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:21 no-preload-837474 kubelet[714]: E1119 02:44:21.645456     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:24 no-preload-837474 kubelet[714]: I1119 02:44:24.287264     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:24 no-preload-837474 kubelet[714]: E1119 02:44:24.287532     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:37 no-preload-837474 kubelet[714]: I1119 02:44:37.532730     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:37 no-preload-837474 kubelet[714]: I1119 02:44:37.685129     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:37 no-preload-837474 kubelet[714]: I1119 02:44:37.685361     714 scope.go:117] "RemoveContainer" containerID="423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	Nov 19 02:44:37 no-preload-837474 kubelet[714]: E1119 02:44:37.685581     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:38 no-preload-837474 kubelet[714]: I1119 02:44:38.691629     714 scope.go:117] "RemoveContainer" containerID="07e1d6dcc863e8f7d82680f4ba851550938cc3f8b8fb1b8080aac242af438836"
	Nov 19 02:44:44 no-preload-837474 kubelet[714]: I1119 02:44:44.287633     714 scope.go:117] "RemoveContainer" containerID="423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	Nov 19 02:44:44 no-preload-837474 kubelet[714]: E1119 02:44:44.287837     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:56 no-preload-837474 kubelet[714]: I1119 02:44:56.532927     714 scope.go:117] "RemoveContainer" containerID="423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	Nov 19 02:44:56 no-preload-837474 kubelet[714]: E1119 02:44:56.533161     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:45:00 no-preload-837474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:45:00 no-preload-837474 kubelet[714]: I1119 02:45:00.037542     714 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 19 02:45:00 no-preload-837474 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:45:00 no-preload-837474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:45:00 no-preload-837474 systemd[1]: kubelet.service: Consumed 1.740s CPU time.
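The kubelet entries show a standard CrashLoopBackOff progression for dashboard-metrics-scraper: each failed restart doubles the back-off (10s in the earlier entries, 20s here), continuing toward the kubelet's back-off cap until a run stays up. One way to read the restart count directly, reusing the kubectl context the harness uses below:

	kubectl --context no-preload-837474 -n kubernetes-dashboard \
	  get pod dashboard-metrics-scraper-6ffb444bf9-z5mvf \
	  -o jsonpath='{.status.containerStatuses[0].restartCount}'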
	
	
	==> kubernetes-dashboard [bf02d90b6bbfa6d0799547f56beede01425bff1af868efbb4db1bed287e9ed6a] <==
	2025/11/19 02:44:16 Using namespace: kubernetes-dashboard
	2025/11/19 02:44:16 Using in-cluster config to connect to apiserver
	2025/11/19 02:44:16 Using secret token for csrf signing
	2025/11/19 02:44:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:44:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:44:16 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 02:44:16 Generating JWE encryption key
	2025/11/19 02:44:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:44:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:44:16 Initializing JWE encryption key from synchronized object
	2025/11/19 02:44:16 Creating in-cluster Sidecar client
	2025/11/19 02:44:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:16 Serving insecurely on HTTP port: 9090
	2025/11/19 02:44:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:16 Starting overwatch
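The two `Metric client health check failed` lines are likely a downstream symptom rather than a dashboard fault: the Sidecar client probes the dashboard-metrics-scraper Service, whose only backing pod is the crash-looping container seen in the kubelet log, so the apiserver proxy has no ready endpoint to forward to. That can be confirmed with:

	kubectl --context no-preload-837474 -n kubernetes-dashboard \
	  get endpoints dashboard-metrics-scraper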
	
	
	==> storage-provisioner [07e1d6dcc863e8f7d82680f4ba851550938cc3f8b8fb1b8080aac242af438836] <==
	I1119 02:44:07.915043       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:44:37.917869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
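This first storage-provisioner instance exits fatally because its initial apiserver probe to 10.96.0.1:443 (the first address of the cluster's 10.96.0.0/12 ServiceCIDR) hung until the dial timeout: it dialed at 02:44:07.9, fractions of a second before kube-proxy finished syncing caches at 02:44:08.28, so the SYN plausibly left the node before the Service NAT rules existed. The replacement instance below comes up cleanly. A rough in-node connectivity probe, assuming curl is present in the node image:

	out/minikube-linux-amd64 -p no-preload-837474 ssh -- \
	  curl -sk --max-time 5 https://10.96.0.1:443/version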
	
	
	==> storage-provisioner [13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd] <==
	I1119 02:44:38.740855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:44:38.747641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:44:38.747674       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:44:38.749391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:42.204530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:46.464619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:50.063541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:53.116887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:56.140531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:56.146107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:56.146543       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:44:56.146779       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-837474_0537c0f0-3c0d-4766-a91a-6d618809c134!
	I1119 02:44:56.147190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5860216f-7052-4908-a51f-f754ee84ec87", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-837474_0537c0f0-3c0d-4766-a91a-6d618809c134 became leader
	W1119 02:44:56.149344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:56.155872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:56.247811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-837474_0537c0f0-3c0d-4766-a91a-6d618809c134!
	W1119 02:44:58.159408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:58.165011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:00.167663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:00.179263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:02.182568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:02.186176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
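The repeated deprecation warnings come from the provisioner's leader election, which still stores its lock as a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, named in the LeaderElection event above) instead of a coordination.k8s.io/v1 Lease, so every renewal round-trip trips the apiserver's warning. The lock object it keeps renewing can be inspected with:

	kubectl --context no-preload-837474 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml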
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-837474 -n no-preload-837474
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-837474 -n no-preload-837474: exit status 2 (308.130695ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
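Exit status 2 is tolerated here because a paused profile is expected to report a Running host while cluster components are stopped; the `--format` flag is a Go template over minikube's status fields, so several can be pulled in one call, for example:

	out/minikube-linux-amd64 status -p no-preload-837474 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'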
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-837474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-837474
helpers_test.go:243: (dbg) docker inspect no-preload-837474:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b",
	        "Created": "2025-11-19T02:42:31.131345889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323033,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:43:55.670058111Z",
	            "FinishedAt": "2025-11-19T02:43:54.700318358Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/hostname",
	        "HostsPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/hosts",
	        "LogPath": "/var/lib/docker/containers/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b/778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b-json.log",
	        "Name": "/no-preload-837474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-837474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-837474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "778842a2abfda801d58e4b01b6627acceed2d2c8fd6cafbd93f5fa99edd1118b",
	                "LowerDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff0558b4fd157e4fb015cbb400d3d61ed321012cbc9a2d31ec55e90dd718f480/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-837474",
	                "Source": "/var/lib/docker/volumes/no-preload-837474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-837474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-837474",
	                "name.minikube.sigs.k8s.io": "no-preload-837474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d5cdce2bfca5eaa4d18d5bf05e41359f29d3bbcb425c276f1a13770ca5195d3d",
	            "SandboxKey": "/var/run/docker/netns/d5cdce2bfca5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-837474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5cdcd8e70bb42a339d48cf84add8542874b20a7d1d91c5ad0bc5b1415ad92cb",
	                    "EndpointID": "fe08729209473e900b34bde82176d683056286b90f22233987d877d4ddd6a895",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "e6:71:cc:96:ab:bd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-837474",
	                        "778842a2abfd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
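The full inspect dump above can be narrowed to the fields the post-mortem checks most (container state and cluster IP) with docker's Go-template filter; `index` is needed because the network name contains dashes:

	docker inspect no-preload-837474 --format \
	  '{{.State.Status}} {{(index .NetworkSettings.Networks "no-preload-837474").IPAddress}}'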
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837474 -n no-preload-837474
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837474 -n no-preload-837474: exit status 2 (299.209711ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-837474 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-837474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ default-k8s-diff-port-167150 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p default-k8s-diff-port-167150 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p newest-cni-956139 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:45 UTC │
	│ image   │ no-preload-837474 image list --format=json                                                                                                                                                                                                    │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p no-preload-837474 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-956139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-167150                                                                                                                                                                                                               │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:45:01.079633  339411 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:45:01.079896  339411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:01.079905  339411 out.go:374] Setting ErrFile to fd 2...
	I1119 02:45:01.079910  339411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:01.080082  339411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:45:01.080519  339411 out.go:368] Setting JSON to false
	I1119 02:45:01.081542  339411 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5248,"bootTime":1763515053,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:45:01.081600  339411 start.go:143] virtualization: kvm guest
	I1119 02:45:01.083571  339411 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:45:01.087840  339411 notify.go:221] Checking for updates...
	I1119 02:45:01.089172  339411 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:45:01.090249  339411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:45:01.091223  339411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:45:01.092319  339411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:45:01.093387  339411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:45:01.094361  339411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:45:01.095805  339411 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:01.096303  339411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:45:01.118824  339411 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:45:01.118911  339411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:01.181407  339411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:false NGoroutines:68 SystemTime:2025-11-19 02:45:01.171333163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:01.181546  339411 docker.go:319] overlay module found
	I1119 02:45:01.183364  339411 out.go:179] * Using the docker driver based on existing profile
	I1119 02:45:01.184618  339411 start.go:309] selected driver: docker
	I1119 02:45:01.184633  339411 start.go:930] validating driver "docker" against &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:01.184704  339411 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:45:01.185227  339411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:01.247035  339411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:false NGoroutines:62 SystemTime:2025-11-19 02:45:01.237379667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:01.247406  339411 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:45:01.247459  339411 cni.go:84] Creating CNI manager for ""
	I1119 02:45:01.247540  339411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:45:01.247593  339411 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:01.250071  339411 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:45:01.251167  339411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:45:01.252290  339411 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:45:01.253285  339411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:45:01.253315  339411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:45:01.253324  339411 cache.go:65] Caching tarball of preloaded images
	I1119 02:45:01.253382  339411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:45:01.253423  339411 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:45:01.253478  339411 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:45:01.253611  339411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:45:01.272884  339411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:45:01.272903  339411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:45:01.272916  339411 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:45:01.272936  339411 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:45:01.272994  339411 start.go:364] duration metric: took 34.51µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:45:01.273009  339411 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:45:01.273016  339411 fix.go:54] fixHost starting: 
	I1119 02:45:01.273196  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:01.289647  339411 fix.go:112] recreateIfNeeded on newest-cni-956139: state=Stopped err=<nil>
	W1119 02:45:01.289674  339411 fix.go:138] unexpected machine state, will restart: <nil>
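The closing warning is the expected fix-host path rather than an error: the earlier `stop -p newest-cni-956139` in the audit table left the kic container in state=Stopped, so this start run reuses the existing machine instead of recreating it. The same state probe the log performs can be issued by hand:

	docker container inspect newest-cni-956139 --format={{.State.Status}}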
	
	
	==> CRI-O <==
	Nov 19 02:44:19 no-preload-837474 crio[568]: time="2025-11-19T02:44:19.689651224Z" level=info msg="Started container" PID=1730 containerID=7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper id=e70dc06f-2a61-4363-bd54-f5ec0252e802 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1fbd6af4990cba7a23fa0f427e7505ab0f325db17d9d024c37095222ed79756
	Nov 19 02:44:20 no-preload-837474 crio[568]: time="2025-11-19T02:44:20.642156467Z" level=info msg="Removing container: c3ef147f11e7853a0b7f0c7e61291aae54cafa7f5fc041da0163c1e18ccc5a18" id=97c3ab62-2f8c-41c7-97e7-5bb211c21486 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:20 no-preload-837474 crio[568]: time="2025-11-19T02:44:20.652117293Z" level=info msg="Removed container c3ef147f11e7853a0b7f0c7e61291aae54cafa7f5fc041da0163c1e18ccc5a18: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper" id=97c3ab62-2f8c-41c7-97e7-5bb211c21486 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.533232276Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2a99776e-5229-46af-a398-89ad519417ff name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.53408035Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d8d5e53e-3fb0-4569-ab1b-04a28d9d8e6a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.535132827Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper" id=5b414989-f41f-4ffc-a908-11da5a61c046 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.535265877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.540771886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.541183834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.570913034Z" level=info msg="Created container 423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper" id=5b414989-f41f-4ffc-a908-11da5a61c046 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.571467201Z" level=info msg="Starting container: 423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0" id=c0e29cf9-68c5-4ba8-9a19-2c850baad246 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.57302002Z" level=info msg="Started container" PID=1740 containerID=423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper id=c0e29cf9-68c5-4ba8-9a19-2c850baad246 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1fbd6af4990cba7a23fa0f427e7505ab0f325db17d9d024c37095222ed79756
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.686407093Z" level=info msg="Removing container: 7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45" id=93021a0b-73b2-4429-83d5-1202c3387dd1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:37 no-preload-837474 crio[568]: time="2025-11-19T02:44:37.695374042Z" level=info msg="Removed container 7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf/dashboard-metrics-scraper" id=93021a0b-73b2-4429-83d5-1202c3387dd1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.69195963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7e6f0dde-0ac7-40d3-bc50-7c397a000ada name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.69278016Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=730cd505-5374-4851-a685-83e2e3ca3d4d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.693736382Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e7926ecb-1bb8-4264-ad7a-3cf77dc5f639 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.69386466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.699285164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.699477582Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/df584b1c0279f560e5ef7b312a2e22c8f61379c0af1d028629850b198d60b92c/merged/etc/passwd: no such file or directory"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.699512153Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/df584b1c0279f560e5ef7b312a2e22c8f61379c0af1d028629850b198d60b92c/merged/etc/group: no such file or directory"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.699787659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.726895503Z" level=info msg="Created container 13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd: kube-system/storage-provisioner/storage-provisioner" id=e7926ecb-1bb8-4264-ad7a-3cf77dc5f639 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.727397931Z" level=info msg="Starting container: 13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd" id=94b0121f-6b39-4660-b80c-72e3dd483f47 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:44:38 no-preload-837474 crio[568]: time="2025-11-19T02:44:38.729108664Z" level=info msg="Started container" PID=1754 containerID=13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd description=kube-system/storage-provisioner/storage-provisioner id=94b0121f-6b39-4660-b80c-72e3dd483f47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3687ee4f5f54bfbe3155afb11f50aedcb931539161985a538ab1a0741ef03e36
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	13ad6da0bd5e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago       Running             storage-provisioner         1                   3687ee4f5f54b       storage-provisioner                          kube-system
	423bb4796dfff       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   a1fbd6af4990c       dashboard-metrics-scraper-6ffb444bf9-z5mvf   kubernetes-dashboard
	bf02d90b6bbfa       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago       Running             kubernetes-dashboard        0                   4e8ab732c7bb3       kubernetes-dashboard-855c9754f9-8rhqr        kubernetes-dashboard
	b65de636abece       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago       Running             busybox                     1                   2476591a6d642       busybox                                      default
	0ece14c0d2989       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago       Running             coredns                     0                   e1d7c42b75d35       coredns-66bc5c9577-44bdr                     kube-system
	bedbafff131c8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago       Running             kindnet-cni                 0                   9e2e8b8986e11       kindnet-96d7l                                kube-system
	07e1d6dcc863e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago       Exited              storage-provisioner         0                   3687ee4f5f54b       storage-provisioner                          kube-system
	473b9088f9191       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago       Running             kube-proxy                  0                   d4dd6c8cef036       kube-proxy-hmxzk                             kube-system
	6dd757954ee06       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   773a1e1ff0c2f       kube-scheduler-no-preload-837474             kube-system
	348a7baf54add       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   2ec1603fe8b50       kube-controller-manager-no-preload-837474    kube-system
	e25eec2afaa5d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   498ebfb2cb989       kube-apiserver-no-preload-837474             kube-system
	70ad4cd08b245       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   b19c52d4b8a84       etcd-no-preload-837474                       kube-system
	
	
	==> coredns [0ece14c0d2989fd84a8306761a16f42605844ab5403451efb55760cdf31a20a0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35863 - 34326 "HINFO IN 419189069344077813.6948578736365676640. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.451018566s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-837474
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-837474
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=no-preload-837474
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_43_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:43:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-837474
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:44:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:44:47 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:44:47 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:44:47 +0000   Wed, 19 Nov 2025 02:43:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:44:47 +0000   Wed, 19 Nov 2025 02:43:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-837474
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                1196f62d-ee96-4bda-889c-0da66532b529
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-44bdr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-837474                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-96d7l                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-837474              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-837474     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-hmxzk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-837474              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-z5mvf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8rhqr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 56s                  kube-proxy       
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node no-preload-837474 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node no-preload-837474 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node no-preload-837474 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-837474 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node no-preload-837474 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node no-preload-837474 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                 node-controller  Node no-preload-837474 event: Registered Node no-preload-837474 in Controller
	  Normal  NodeReady                98s                  kubelet          Node no-preload-837474 status is now: NodeReady
	  Normal  Starting                 62s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-837474 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-837474 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-837474 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node no-preload-837474 event: Registered Node no-preload-837474 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [70ad4cd08b245fb372615d7c559ce529ef762f5d44fc541f9bc7000ebd69b651] <==
	{"level":"warn","ts":"2025-11-19T02:44:05.500268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.503052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.518851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.529456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.539108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.563540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.584977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.597574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.614506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.624679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.639294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.650948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.662631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.686766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.698356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.716599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.738181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.745713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.760993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.776756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.786491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:05.879938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:44:07.701680Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.743919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/admin\" limit:1 ","response":"range_response_count:1 size:3756"}
	{"level":"info","ts":"2025-11-19T02:44:07.701784Z","caller":"traceutil/trace.go:172","msg":"trace[929684817] range","detail":"{range_begin:/registry/clusterroles/admin; range_end:; response_count:1; response_revision:539; }","duration":"133.876931ms","start":"2025-11-19T02:44:07.567888Z","end":"2025-11-19T02:44:07.701765Z","steps":["trace[929684817] 'agreement among raft nodes before linearized reading'  (duration: 45.740727ms)","trace[929684817] 'range keys from in-memory index tree'  (duration: 87.881734ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:44:07.702052Z","caller":"traceutil/trace.go:172","msg":"trace[2077588863] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"138.301292ms","start":"2025-11-19T02:44:07.563722Z","end":"2025-11-19T02:44:07.702023Z","steps":["trace[2077588863] 'process raft request'  (duration: 49.973155ms)","trace[2077588863] 'compare'  (duration: 87.80872ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:45:04 up  1:27,  0 user,  load average: 3.18, 3.28, 2.29
	Linux no-preload-837474 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bedbafff131c81000054173f696af6ba2d4a9350b8cbdd2985d33561fa58f639] <==
	I1119 02:44:08.147019       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:44:08.147832       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 02:44:08.148157       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:44:08.148311       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:44:08.148386       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:44:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:44:08.448836       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:44:08.449061       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:44:08.449085       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:44:08.449894       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:44:08.825484       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:44:08.825522       1 metrics.go:72] Registering metrics
	I1119 02:44:08.825681       1 controller.go:711] "Syncing nftables rules"
	I1119 02:44:18.447769       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:18.447836       1 main.go:301] handling current node
	I1119 02:44:28.447790       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:28.447839       1 main.go:301] handling current node
	I1119 02:44:38.447664       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:38.447709       1 main.go:301] handling current node
	I1119 02:44:48.446959       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:48.446993       1 main.go:301] handling current node
	I1119 02:44:58.449606       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:44:58.449640       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e25eec2afaa5d216ff068aae46bf36572a21229c3f7eba57128ac16e1b16a13a] <==
	I1119 02:44:06.616768       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:44:06.616793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:44:06.616830       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:44:06.616959       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 02:44:06.617081       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:44:06.617143       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:44:06.617228       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 02:44:06.619862       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1119 02:44:06.632953       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 02:44:06.639421       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:44:06.644150       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:44:06.654621       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 02:44:06.654899       1 policy_source.go:240] refreshing policies
	I1119 02:44:06.659388       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:44:07.038687       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:44:07.123418       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:44:07.195970       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:44:07.217393       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:44:07.234621       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:44:07.297301       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.51.178"}
	I1119 02:44:07.309724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.69.115"}
	I1119 02:44:07.515811       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:44:09.936560       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:44:10.332960       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:44:10.434319       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [348a7baf54addbf4a9c81030950fa886111d02619363237a83c83efe031b6e4e] <==
	I1119 02:44:09.929539       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:44:09.929574       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:44:09.929572       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 02:44:09.929598       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:44:09.929645       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:44:09.929674       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:44:09.929686       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:44:09.929692       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:44:09.929722       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 02:44:09.931124       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:44:09.931373       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:44:09.931460       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:44:09.932443       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:44:09.933505       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:44:09.935646       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:44:09.936895       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 02:44:09.936945       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:44:09.938142       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 02:44:09.951587       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:44:09.951672       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:44:09.951719       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:44:09.951732       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:44:09.951739       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:44:09.954797       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:44:09.968174       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [473b9088f9191f71c546e80811be04f683a56318e61de78d2d9edc9173aac7de] <==
	I1119 02:44:07.958532       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:44:08.043462       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:44:08.144358       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:44:08.144708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 02:44:08.144888       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:44:08.170662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:44:08.170738       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:44:08.177602       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:44:08.178152       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:44:08.178207       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:08.179822       1 config.go:309] "Starting node config controller"
	I1119 02:44:08.179872       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:44:08.179882       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:44:08.180055       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:44:08.180061       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:44:08.180099       1 config.go:200] "Starting service config controller"
	I1119 02:44:08.180105       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:44:08.180190       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:44:08.180194       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:44:08.280629       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:44:08.280705       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:44:08.280656       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6dd757954ee069960d7775b0cb8053165f8ed7b87e78e24e092a5d8d6ad8c518] <==
	I1119 02:44:04.557347       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:44:06.571743       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:44:06.571893       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1119 02:44:06.572059       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:44:06.572076       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:44:06.611406       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:44:06.611451       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:44:06.617857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:44:06.617897       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:44:06.618687       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:44:06.618781       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:44:06.718701       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:44:10 no-preload-837474 kubelet[714]: I1119 02:44:10.673513     714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/875975d2-e3f6-411f-88f9-8c4fa8628e09-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8rhqr\" (UID: \"875975d2-e3f6-411f-88f9-8c4fa8628e09\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8rhqr"
	Nov 19 02:44:16 no-preload-837474 kubelet[714]: I1119 02:44:16.387505     714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 02:44:17 no-preload-837474 kubelet[714]: I1119 02:44:17.144303     714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8rhqr" podStartSLOduration=1.498470998 podStartE2EDuration="7.144280539s" podCreationTimestamp="2025-11-19 02:44:10 +0000 UTC" firstStartedPulling="2025-11-19 02:44:10.887388642 +0000 UTC m=+8.527563668" lastFinishedPulling="2025-11-19 02:44:16.533198163 +0000 UTC m=+14.173373209" observedRunningTime="2025-11-19 02:44:16.638344621 +0000 UTC m=+14.278519660" watchObservedRunningTime="2025-11-19 02:44:17.144280539 +0000 UTC m=+14.784455591"
	Nov 19 02:44:19 no-preload-837474 kubelet[714]: I1119 02:44:19.635642     714 scope.go:117] "RemoveContainer" containerID="c3ef147f11e7853a0b7f0c7e61291aae54cafa7f5fc041da0163c1e18ccc5a18"
	Nov 19 02:44:20 no-preload-837474 kubelet[714]: I1119 02:44:20.640742     714 scope.go:117] "RemoveContainer" containerID="c3ef147f11e7853a0b7f0c7e61291aae54cafa7f5fc041da0163c1e18ccc5a18"
	Nov 19 02:44:20 no-preload-837474 kubelet[714]: I1119 02:44:20.640914     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:20 no-preload-837474 kubelet[714]: E1119 02:44:20.641086     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:21 no-preload-837474 kubelet[714]: I1119 02:44:21.645248     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:21 no-preload-837474 kubelet[714]: E1119 02:44:21.645456     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:24 no-preload-837474 kubelet[714]: I1119 02:44:24.287264     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:24 no-preload-837474 kubelet[714]: E1119 02:44:24.287532     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:37 no-preload-837474 kubelet[714]: I1119 02:44:37.532730     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:37 no-preload-837474 kubelet[714]: I1119 02:44:37.685129     714 scope.go:117] "RemoveContainer" containerID="7fa55dd64ea5ea85dfe50bc0e21f0b8c225d3fc962df394ae297ab993dff0e45"
	Nov 19 02:44:37 no-preload-837474 kubelet[714]: I1119 02:44:37.685361     714 scope.go:117] "RemoveContainer" containerID="423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	Nov 19 02:44:37 no-preload-837474 kubelet[714]: E1119 02:44:37.685581     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:38 no-preload-837474 kubelet[714]: I1119 02:44:38.691629     714 scope.go:117] "RemoveContainer" containerID="07e1d6dcc863e8f7d82680f4ba851550938cc3f8b8fb1b8080aac242af438836"
	Nov 19 02:44:44 no-preload-837474 kubelet[714]: I1119 02:44:44.287633     714 scope.go:117] "RemoveContainer" containerID="423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	Nov 19 02:44:44 no-preload-837474 kubelet[714]: E1119 02:44:44.287837     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:44:56 no-preload-837474 kubelet[714]: I1119 02:44:56.532927     714 scope.go:117] "RemoveContainer" containerID="423bb4796dfff9afc99af43831f4867bd0432e350c892931fcb4c9a9f59f6ae0"
	Nov 19 02:44:56 no-preload-837474 kubelet[714]: E1119 02:44:56.533161     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z5mvf_kubernetes-dashboard(904fbfe7-d798-4c74-914d-c2de2c4a3d83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z5mvf" podUID="904fbfe7-d798-4c74-914d-c2de2c4a3d83"
	Nov 19 02:45:00 no-preload-837474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:45:00 no-preload-837474 kubelet[714]: I1119 02:45:00.037542     714 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 19 02:45:00 no-preload-837474 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:45:00 no-preload-837474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 02:45:00 no-preload-837474 systemd[1]: kubelet.service: Consumed 1.740s CPU time.
	
	
	==> kubernetes-dashboard [bf02d90b6bbfa6d0799547f56beede01425bff1af868efbb4db1bed287e9ed6a] <==
	2025/11/19 02:44:16 Starting overwatch
	2025/11/19 02:44:16 Using namespace: kubernetes-dashboard
	2025/11/19 02:44:16 Using in-cluster config to connect to apiserver
	2025/11/19 02:44:16 Using secret token for csrf signing
	2025/11/19 02:44:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:44:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:44:16 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 02:44:16 Generating JWE encryption key
	2025/11/19 02:44:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:44:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:44:16 Initializing JWE encryption key from synchronized object
	2025/11/19 02:44:16 Creating in-cluster Sidecar client
	2025/11/19 02:44:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:44:16 Serving insecurely on HTTP port: 9090
	2025/11/19 02:44:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [07e1d6dcc863e8f7d82680f4ba851550938cc3f8b8fb1b8080aac242af438836] <==
	I1119 02:44:07.915043       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:44:37.917869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [13ad6da0bd5e63f9266596b090ebf10732740126bcb0b39f60bde9195f6d11fd] <==
	I1119 02:44:38.740855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:44:38.747641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:44:38.747674       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:44:38.749391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:42.204530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:46.464619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:50.063541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:53.116887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:56.140531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:56.146107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:56.146543       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:44:56.146779       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-837474_0537c0f0-3c0d-4766-a91a-6d618809c134!
	I1119 02:44:56.147190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5860216f-7052-4908-a51f-f754ee84ec87", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-837474_0537c0f0-3c0d-4766-a91a-6d618809c134 became leader
	W1119 02:44:56.149344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:56.155872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:44:56.247811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-837474_0537c0f0-3c0d-4766-a91a-6d618809c134!
	W1119 02:44:58.159408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:44:58.165011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:00.167663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:00.179263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:02.182568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:02.186176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:04.188933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:45:04.193661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-837474 -n no-preload-837474
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-837474 -n no-preload-837474: exit status 2 (321.718815ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-837474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.71s)
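[editor's note: a minimal local-triage sketch, not part of the recorded run. It assumes the no-preload-837474 profile still exists on the test host, and it mirrors the pause invocation the harness logs for the newest-cni test below; the crictl step is an assumed follow-up, not something the harness ran.]

	out/minikube-linux-amd64 pause -p no-preload-837474 --alsologtostderr -v=1
	# If pause exits with status 80, list what the runtime still reports running:
	out/minikube-linux-amd64 ssh -p no-preload-837474 -- sudo crictl ps -a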

TestStartStop/group/newest-cni/serial/Pause (5.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-956139 --alsologtostderr -v=1
E1119 02:45:13.206988   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:45:13.213298   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:45:13.224593   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:45:13.245895   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:45:13.287256   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:45:13.368617   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:45:13.529876   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:45:13.851773   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:45:14.493994   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-956139 --alsologtostderr -v=1: exit status 80 (2.366282062s)

-- stdout --
	* Pausing node newest-cni-956139 ... 
	
	

-- /stdout --
** stderr ** 
	I1119 02:45:12.674071  343484 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:45:12.674300  343484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:12.674308  343484 out.go:374] Setting ErrFile to fd 2...
	I1119 02:45:12.674319  343484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:12.674504  343484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:45:12.674715  343484 out.go:368] Setting JSON to false
	I1119 02:45:12.674760  343484 mustload.go:66] Loading cluster: newest-cni-956139
	I1119 02:45:12.675049  343484 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:12.675395  343484 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:12.692345  343484 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:45:12.692610  343484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:12.752364  343484 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-19 02:45:12.742406375 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:12.752996  343484 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-956139 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 02:45:12.755072  343484 out.go:179] * Pausing node newest-cni-956139 ... 
	I1119 02:45:12.756172  343484 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:45:12.756426  343484 ssh_runner.go:195] Run: systemctl --version
	I1119 02:45:12.756481  343484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:12.773073  343484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:12.864496  343484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:45:12.875990  343484 pause.go:52] kubelet running: true
	I1119 02:45:12.876041  343484 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:45:13.013965  343484 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:45:13.014047  343484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:45:13.076326  343484 cri.go:89] found id: "036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320"
	I1119 02:45:13.076347  343484 cri.go:89] found id: "b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4"
	I1119 02:45:13.076351  343484 cri.go:89] found id: "03bea9a3d49a8d6bd560dac25fbaca2c60e0f475532d71b1f7e8403b3b5770a3"
	I1119 02:45:13.076354  343484 cri.go:89] found id: "003043020d52b3254d0fb8dce6b5de51f9100fc7116a74f3d11a53430cd79dfb"
	I1119 02:45:13.076357  343484 cri.go:89] found id: "2b5bcddd47a712b8ec423213ace360e69321fcb5d2ba522d16fede41bdbecf8f"
	I1119 02:45:13.076360  343484 cri.go:89] found id: "ce178759508ab503068b3436802db13c84f6f6f0a22e84e8542a3a203fc6cbba"
	I1119 02:45:13.076363  343484 cri.go:89] found id: ""
	I1119 02:45:13.076397  343484 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:45:13.087603  343484 retry.go:31] will retry after 361.475997ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:13Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:45:13.450212  343484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:45:13.462631  343484 pause.go:52] kubelet running: false
	I1119 02:45:13.462677  343484 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:45:13.566037  343484 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:45:13.566113  343484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:45:13.626181  343484 cri.go:89] found id: "036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320"
	I1119 02:45:13.626203  343484 cri.go:89] found id: "b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4"
	I1119 02:45:13.626209  343484 cri.go:89] found id: "03bea9a3d49a8d6bd560dac25fbaca2c60e0f475532d71b1f7e8403b3b5770a3"
	I1119 02:45:13.626214  343484 cri.go:89] found id: "003043020d52b3254d0fb8dce6b5de51f9100fc7116a74f3d11a53430cd79dfb"
	I1119 02:45:13.626217  343484 cri.go:89] found id: "2b5bcddd47a712b8ec423213ace360e69321fcb5d2ba522d16fede41bdbecf8f"
	I1119 02:45:13.626222  343484 cri.go:89] found id: "ce178759508ab503068b3436802db13c84f6f6f0a22e84e8542a3a203fc6cbba"
	I1119 02:45:13.626225  343484 cri.go:89] found id: ""
	I1119 02:45:13.626270  343484 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:45:13.637340  343484 retry.go:31] will retry after 380.935008ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:13Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:45:14.018960  343484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:45:14.031092  343484 pause.go:52] kubelet running: false
	I1119 02:45:14.031161  343484 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:45:14.138205  343484 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:45:14.138304  343484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:45:14.201261  343484 cri.go:89] found id: "036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320"
	I1119 02:45:14.201284  343484 cri.go:89] found id: "b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4"
	I1119 02:45:14.201288  343484 cri.go:89] found id: "03bea9a3d49a8d6bd560dac25fbaca2c60e0f475532d71b1f7e8403b3b5770a3"
	I1119 02:45:14.201290  343484 cri.go:89] found id: "003043020d52b3254d0fb8dce6b5de51f9100fc7116a74f3d11a53430cd79dfb"
	I1119 02:45:14.201293  343484 cri.go:89] found id: "2b5bcddd47a712b8ec423213ace360e69321fcb5d2ba522d16fede41bdbecf8f"
	I1119 02:45:14.201296  343484 cri.go:89] found id: "ce178759508ab503068b3436802db13c84f6f6f0a22e84e8542a3a203fc6cbba"
	I1119 02:45:14.201298  343484 cri.go:89] found id: ""
	I1119 02:45:14.201358  343484 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:45:14.214451  343484 retry.go:31] will retry after 563.218561ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:14Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:45:14.777871  343484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:45:14.790099  343484 pause.go:52] kubelet running: false
	I1119 02:45:14.790171  343484 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:45:14.897998  343484 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:45:14.898102  343484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:45:14.963623  343484 cri.go:89] found id: "036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320"
	I1119 02:45:14.963644  343484 cri.go:89] found id: "b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4"
	I1119 02:45:14.963648  343484 cri.go:89] found id: "03bea9a3d49a8d6bd560dac25fbaca2c60e0f475532d71b1f7e8403b3b5770a3"
	I1119 02:45:14.963651  343484 cri.go:89] found id: "003043020d52b3254d0fb8dce6b5de51f9100fc7116a74f3d11a53430cd79dfb"
	I1119 02:45:14.963653  343484 cri.go:89] found id: "2b5bcddd47a712b8ec423213ace360e69321fcb5d2ba522d16fede41bdbecf8f"
	I1119 02:45:14.963657  343484 cri.go:89] found id: "ce178759508ab503068b3436802db13c84f6f6f0a22e84e8542a3a203fc6cbba"
	I1119 02:45:14.963659  343484 cri.go:89] found id: ""
	I1119 02:45:14.963728  343484 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:45:14.977495  343484 out.go:203] 
	W1119 02:45:14.978780  343484 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:45:14.978800  343484 out.go:285] * 
	* 
	W1119 02:45:14.982782  343484 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:45:14.983993  343484 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-956139 --alsologtostderr -v=1 failed: exit status 80
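The failure mode is visible in the retry lines above: each `sudo runc list -f json` attempt exits 1 with "open /run/runc: no such file or directory" (a bare `runc list` consults runc's default state directory /run/runc, which evidently does not exist on this CRI-O node), and pause.go backs off roughly 361ms, 381ms, and 563ms before surfacing GUEST_PAUSE. Below is a minimal sketch of that retry-with-growing-backoff pattern, assuming an illustrative helper; the name, attempt count, and jitter are assumptions, not minikube's actual retry.go.

	// Illustrative sketch of a retry-with-backoff helper in the spirit of the
	// retry.go lines above; names and constants are assumptions, not minikube code.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryCommand runs the given command up to attempts times, sleeping a
	// growing, jittered delay between failures, and returns the last error.
	func retryCommand(attempts int, base time.Duration, name string, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			out, e := exec.Command(name, args...).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("%s %v: %v\nstdout/stderr:\n%s", name, args, e, out)
			// Grow the delay each round and add jitter; the uneven gaps
			// (~361ms, ~381ms, ~563ms) in the log suggest similar jitter.
			time.Sleep(base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)/2)))
		}
		return err
	}

	func main() {
		err := retryCommand(4, 300*time.Millisecond, "sudo", "runc", "list", "-f", "json")
		if err != nil {
			fmt.Println("list running:", err) // the same failure surfaced as GUEST_PAUSE above
		}
	}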
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-956139
helpers_test.go:243: (dbg) docker inspect newest-cni-956139:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3",
	        "Created": "2025-11-19T02:44:35.029315719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 339702,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:45:01.314107768Z",
	            "FinishedAt": "2025-11-19T02:45:00.429678243Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/hosts",
	        "LogPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3-json.log",
	        "Name": "/newest-cni-956139",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-956139:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-956139",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3",
	                "LowerDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-956139",
	                "Source": "/var/lib/docker/volumes/newest-cni-956139/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-956139",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-956139",
	                "name.minikube.sigs.k8s.io": "newest-cni-956139",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c198b1f73e620dffd762b99390015e6f0324225742fb53a1e3570ad46c4c520a",
	            "SandboxKey": "/var/run/docker/netns/c198b1f73e62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-956139": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c158809bc17a4fb99da40a7d719b98cd3e7fa529cc77ba53af7dfac4ad266e67",
	                    "EndpointID": "438f20f6cbc2ba15db06c896c725d201e146079b30f3f993009c3a5d429feac4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "6a:0d:3d:6e:29:65",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-956139",
	                        "9939767f1de8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
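For reference, the harness resolves the node's SSH endpoint from this inspect output: NetworkSettings.Ports maps "22/tcp" to 127.0.0.1:33133, the same value the earlier cli_runner call extracts with a Go template. A small sketch of that lookup, assuming only the docker CLI and the container name taken from this report:

	// Sketch: extract the published SSH port the same way the log's
	// cli_runner call does, via a Go template passed to docker inspect.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("newest-cni-956139")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh -p", port, "docker@127.0.0.1") // 33133 in the inspect output above
	}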
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139: exit status 2 (304.102314ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
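The "(may be ok)" note reflects that `minikube status --format={{.Host}}` printed Running but exited 2, consistent with the host container being up while kubelet was left disabled by the failed pause. A sketch of how a caller can recover both the stdout and the numeric exit code, assuming the harness-style invocation (binary path and profile name taken from this report):

	// Sketch: run a status-style command and capture both its stdout and its
	// exit code, as helpers_test does when noting "exit status 2 (may be ok)".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "newest-cni-956139")
		out, err := cmd.Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // non-zero when some component is not running
		} else if err != nil {
			fmt.Println("failed to run:", err)
			return
		}
		fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), code) // e.g. host=Running exit=2
	}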
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-956139 logs -n 25
E1119 02:45:15.775836   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/calico-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ default-k8s-diff-port-167150 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p default-k8s-diff-port-167150 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p newest-cni-956139 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:45 UTC │
	│ image   │ no-preload-837474 image list --format=json                                                                                                                                                                                                    │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p no-preload-837474 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-956139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ delete  │ -p default-k8s-diff-port-167150                                                                                                                                                                                                               │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ delete  │ -p no-preload-837474                                                                                                                                                                                                                          │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ delete  │ -p no-preload-837474                                                                                                                                                                                                                          │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ image   │ newest-cni-956139 image list --format=json                                                                                                                                                                                                    │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ pause   │ -p newest-cni-956139 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:45:01.079633  339411 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:45:01.079896  339411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:01.079905  339411 out.go:374] Setting ErrFile to fd 2...
	I1119 02:45:01.079910  339411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:01.080082  339411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:45:01.080519  339411 out.go:368] Setting JSON to false
	I1119 02:45:01.081542  339411 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5248,"bootTime":1763515053,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:45:01.081600  339411 start.go:143] virtualization: kvm guest
	I1119 02:45:01.083571  339411 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:45:01.087840  339411 notify.go:221] Checking for updates...
	I1119 02:45:01.089172  339411 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:45:01.090249  339411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:45:01.091223  339411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:45:01.092319  339411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:45:01.093387  339411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:45:01.094361  339411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:45:01.095805  339411 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:01.096303  339411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:45:01.118824  339411 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:45:01.118911  339411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:01.181407  339411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:false NGoroutines:68 SystemTime:2025-11-19 02:45:01.171333163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:01.181546  339411 docker.go:319] overlay module found
	I1119 02:45:01.183364  339411 out.go:179] * Using the docker driver based on existing profile
	I1119 02:45:01.184618  339411 start.go:309] selected driver: docker
	I1119 02:45:01.184633  339411 start.go:930] validating driver "docker" against &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:01.184704  339411 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:45:01.185227  339411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:01.247035  339411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:false NGoroutines:62 SystemTime:2025-11-19 02:45:01.237379667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:01.247406  339411 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:45:01.247459  339411 cni.go:84] Creating CNI manager for ""
	I1119 02:45:01.247540  339411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:45:01.247593  339411 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:01.250071  339411 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:45:01.251167  339411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:45:01.252290  339411 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:45:01.253285  339411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:45:01.253315  339411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:45:01.253324  339411 cache.go:65] Caching tarball of preloaded images
	I1119 02:45:01.253382  339411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:45:01.253423  339411 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:45:01.253478  339411 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:45:01.253611  339411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:45:01.272884  339411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:45:01.272903  339411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:45:01.272916  339411 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:45:01.272936  339411 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:45:01.272994  339411 start.go:364] duration metric: took 34.51µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:45:01.273009  339411 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:45:01.273016  339411 fix.go:54] fixHost starting: 
	I1119 02:45:01.273196  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:01.289647  339411 fix.go:112] recreateIfNeeded on newest-cni-956139: state=Stopped err=<nil>
	W1119 02:45:01.289674  339411 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:45:01.291301  339411 out.go:252] * Restarting existing docker container for "newest-cni-956139" ...
	I1119 02:45:01.291349  339411 cli_runner.go:164] Run: docker start newest-cni-956139
	I1119 02:45:01.578382  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:01.603013  339411 kic.go:430] container "newest-cni-956139" state is running.
	I1119 02:45:01.603482  339411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:45:01.626380  339411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:45:01.626664  339411 machine.go:94] provisionDockerMachine start ...
	I1119 02:45:01.626750  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:01.649062  339411 main.go:143] libmachine: Using SSH client type: native
	I1119 02:45:01.649329  339411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1119 02:45:01.649448  339411 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:45:01.650180  339411 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59880->127.0.0.1:33133: read: connection reset by peer
	I1119 02:45:04.782405  339411 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:45:04.782453  339411 ubuntu.go:182] provisioning hostname "newest-cni-956139"
	I1119 02:45:04.782524  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:04.801569  339411 main.go:143] libmachine: Using SSH client type: native
	I1119 02:45:04.801918  339411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1119 02:45:04.801940  339411 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956139 && echo "newest-cni-956139" | sudo tee /etc/hostname
	I1119 02:45:04.940427  339411 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:45:04.940521  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:04.961142  339411 main.go:143] libmachine: Using SSH client type: native
	I1119 02:45:04.961461  339411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1119 02:45:04.961492  339411 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956139/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:45:05.092494  339411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:45:05.092522  339411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:45:05.092564  339411 ubuntu.go:190] setting up certificates
	I1119 02:45:05.092573  339411 provision.go:84] configureAuth start
	I1119 02:45:05.092618  339411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:45:05.109653  339411 provision.go:143] copyHostCerts
	I1119 02:45:05.109722  339411 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:45:05.109740  339411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:45:05.109816  339411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:45:05.109943  339411 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:45:05.109955  339411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:45:05.110000  339411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:45:05.110099  339411 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:45:05.110110  339411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:45:05.110149  339411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:45:05.110223  339411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956139 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956139]
	I1119 02:45:06.040181  339411 provision.go:177] copyRemoteCerts
	I1119 02:45:06.040250  339411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:45:06.040309  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.057778  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.150670  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:45:06.166823  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:45:06.182773  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:45:06.198548  339411 provision.go:87] duration metric: took 1.105962937s to configureAuth
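Note: configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-956139). As a rough illustration of the x509 calls such a step involves — a self-signed sketch; the real step signs with the ca.pem/ca-key.pem pair named in the auth options:

-- example (go) --
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical fresh key; minikube reuses its machine key material.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-956139"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision log line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-956139"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
-- /example --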
	I1119 02:45:06.198571  339411 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:45:06.198736  339411 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:06.198844  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.217632  339411 main.go:143] libmachine: Using SSH client type: native
	I1119 02:45:06.217823  339411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1119 02:45:06.217842  339411 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:45:06.491606  339411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:45:06.491634  339411 machine.go:97] duration metric: took 4.86494951s to provisionDockerMachine
	I1119 02:45:06.491646  339411 start.go:293] postStartSetup for "newest-cni-956139" (driver="docker")
	I1119 02:45:06.491657  339411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:45:06.491713  339411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:45:06.491758  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.509766  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.602847  339411 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:45:06.605984  339411 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:45:06.606017  339411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:45:06.606028  339411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:45:06.606079  339411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:45:06.606155  339411 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:45:06.606236  339411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:45:06.613102  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:45:06.629391  339411 start.go:296] duration metric: took 137.732659ms for postStartSetup
	I1119 02:45:06.629476  339411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:45:06.629521  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.647458  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.737831  339411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:45:06.742173  339411 fix.go:56] duration metric: took 5.469149933s for fixHost
	I1119 02:45:06.742200  339411 start.go:83] releasing machines lock for "newest-cni-956139", held for 5.469195147s
	I1119 02:45:06.742256  339411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:45:06.760619  339411 ssh_runner.go:195] Run: cat /version.json
	I1119 02:45:06.760661  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.760732  339411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:45:06.760783  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.780106  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.780941  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.874379  339411 ssh_runner.go:195] Run: systemctl --version
	I1119 02:45:06.931706  339411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:45:06.965640  339411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:45:06.970209  339411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:45:06.970271  339411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:45:06.977657  339411 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
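Note: the find command above masks any pre-existing bridge/podman CNI configs by renaming them to *.mk_disabled, leaving pod networking to the kindnet CNI chosen later; in this run nothing matched. A simplified Go equivalent of that rename pass (illustrative only; the real command also uses -maxdepth and -type f):

-- example (go) --
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames any bridge/podman CNI config in dir out of
// the way, mirroring the `find ... -exec mv {} {}.mk_disabled` step above.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeConfigs("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", moved) // empty here, matching the log
}
-- /example --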
	I1119 02:45:06.977675  339411 start.go:496] detecting cgroup driver to use...
	I1119 02:45:06.977699  339411 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:45:06.977737  339411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:45:06.990698  339411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:45:07.001855  339411 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:45:07.001920  339411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:45:07.015158  339411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:45:07.026200  339411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:45:07.102576  339411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:45:07.183456  339411 docker.go:234] disabling docker service ...
	I1119 02:45:07.183514  339411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:45:07.198832  339411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:45:07.211763  339411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:45:07.293050  339411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:45:07.373337  339411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:45:07.385485  339411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:45:07.398486  339411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:45:07.398535  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.406473  339411 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:45:07.406531  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.414471  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.422357  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.430343  339411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:45:07.437857  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.446164  339411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.453877  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.462010  339411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:45:07.468849  339411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:45:07.475479  339411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:45:07.551854  339411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:45:07.721169  339411 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:45:07.721235  339411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:45:07.726527  339411 start.go:564] Will wait 60s for crictl version
	I1119 02:45:07.726592  339411 ssh_runner.go:195] Run: which crictl
	I1119 02:45:07.729907  339411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:45:07.752462  339411 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
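Note: after the crio restart, minikube waits up to 60s each for the socket path and for crictl to answer, as logged above. A bare-bones version of that socket wait, under the assumption that a successful stat is a good enough readiness signal:

-- example (go) --
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; crictl can talk to the runtime
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}
-- /example --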
	I1119 02:45:07.752534  339411 ssh_runner.go:195] Run: crio --version
	I1119 02:45:07.779996  339411 ssh_runner.go:195] Run: crio --version
	I1119 02:45:07.814427  339411 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:45:07.816483  339411 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:45:07.835079  339411 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:45:07.839639  339411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
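Note: the one-liner above keeps the host.minikube.internal entry idempotent: strip any stale line, append the current mapping, then copy the result back over /etc/hosts. The same pattern in plain Go (a sketch, not the shipped implementation):

-- example (go) --
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<host>" and appends a fresh
// "ip\thost" mapping, mirroring the grep -v / echo / cp pipeline in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry refreshed")
}
-- /example --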
	I1119 02:45:07.853234  339411 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 02:45:07.854372  339411 kubeadm.go:884] updating cluster {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:45:07.854510  339411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:45:07.854558  339411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:45:07.890029  339411 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:45:07.890046  339411 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:45:07.890085  339411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:45:07.918515  339411 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:45:07.918534  339411 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:45:07.918541  339411 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:45:07.918641  339411 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:45:07.918700  339411 ssh_runner.go:195] Run: crio config
	I1119 02:45:07.963287  339411 cni.go:84] Creating CNI manager for ""
	I1119 02:45:07.963313  339411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:45:07.963329  339411 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:45:07.963359  339411 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956139 NodeName:newest-cni-956139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:45:07.963541  339411 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
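Note: the generated kubeadm config above is a single file holding four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of walking such a multi-document stream, assuming gopkg.in/yaml.v3 is available (kubeadm itself uses its own loaders):

-- example (go) --
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the generated config shown above.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Prints e.g. "kubeadm.k8s.io/v1beta4 / InitConfiguration".
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
-- /example --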
	
	I1119 02:45:07.963602  339411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:45:07.971811  339411 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:45:07.971873  339411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:45:07.979777  339411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 02:45:07.992762  339411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:45:08.005532  339411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1119 02:45:08.016987  339411 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:45:08.020448  339411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:45:08.029864  339411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:45:08.117845  339411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:45:08.148863  339411 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139 for IP: 192.168.76.2
	I1119 02:45:08.148882  339411 certs.go:195] generating shared ca certs ...
	I1119 02:45:08.148900  339411 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:45:08.149042  339411 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:45:08.149111  339411 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:45:08.149130  339411 certs.go:257] generating profile certs ...
	I1119 02:45:08.149245  339411 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key
	I1119 02:45:08.149327  339411 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d
	I1119 02:45:08.149378  339411 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key
	I1119 02:45:08.149533  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:45:08.149577  339411 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:45:08.149588  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:45:08.149625  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:45:08.149656  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:45:08.149690  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:45:08.149750  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:45:08.150490  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:45:08.170609  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:45:08.189985  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:45:08.209416  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:45:08.235375  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:45:08.252887  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:45:08.269062  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:45:08.284703  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:45:08.300158  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:45:08.315646  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:45:08.331090  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:45:08.347387  339411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:45:08.358597  339411 ssh_runner.go:195] Run: openssl version
	I1119 02:45:08.364174  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:45:08.371687  339411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:45:08.374967  339411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:45:08.375005  339411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:45:08.407666  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:45:08.414564  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:45:08.422608  339411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:45:08.426026  339411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:45:08.426070  339411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:45:08.458986  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:45:08.466035  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:45:08.473546  339411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:45:08.476848  339411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:45:08.476881  339411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:45:08.509809  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:45:08.516871  339411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:45:08.520222  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:45:08.553900  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:45:08.588040  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:45:08.620691  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:45:08.655677  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:45:08.697408  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 02:45:08.745066  339411 kubeadm.go:401] StartCluster: {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:08.745174  339411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:45:08.745232  339411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:45:08.783552  339411 cri.go:89] found id: "03bea9a3d49a8d6bd560dac25fbaca2c60e0f475532d71b1f7e8403b3b5770a3"
	I1119 02:45:08.783632  339411 cri.go:89] found id: "003043020d52b3254d0fb8dce6b5de51f9100fc7116a74f3d11a53430cd79dfb"
	I1119 02:45:08.783643  339411 cri.go:89] found id: "2b5bcddd47a712b8ec423213ace360e69321fcb5d2ba522d16fede41bdbecf8f"
	I1119 02:45:08.783647  339411 cri.go:89] found id: "ce178759508ab503068b3436802db13c84f6f6f0a22e84e8542a3a203fc6cbba"
	I1119 02:45:08.783652  339411 cri.go:89] found id: ""
	I1119 02:45:08.783697  339411 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:45:08.797371  339411 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:08Z" level=error msg="open /run/runc: no such file or directory"
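Note: the runc failure above is tolerated: the paused-container check only consults `sudo runc list -f json`, and when the /run/runc state directory is absent minikube proceeds as if nothing is paused. A sketch of that tolerant check (field names assumed from runc's JSON output; not minikube's exact logic):

-- example (go) --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the two fields this check cares about; runc's JSON
// carries more (pid, bundle, created, ...).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers treats any failure of `runc list` (such as the missing
// /run/runc directory in the log above) as "no paused containers".
func pausedContainers() []runcContainer {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil
	}
	var all []runcContainer
	if json.Unmarshal(out, &all) != nil {
		return nil
	}
	var paused []runcContainer
	for _, c := range all {
		if c.Status == "paused" {
			paused = append(paused, c)
		}
	}
	return paused
}

func main() { fmt.Println(pausedContainers()) }
-- /example --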
	I1119 02:45:08.797509  339411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:45:08.807910  339411 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:45:08.807932  339411 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:45:08.807982  339411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:45:08.817546  339411 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:45:08.818026  339411 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-956139" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:45:08.818196  339411 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11126/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-956139" cluster setting kubeconfig missing "newest-cni-956139" context setting]
	I1119 02:45:08.818679  339411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:45:08.820195  339411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:45:08.827993  339411 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 02:45:08.828024  339411 kubeadm.go:602] duration metric: took 20.086273ms to restartPrimaryControlPlane
	I1119 02:45:08.828039  339411 kubeadm.go:403] duration metric: took 82.993201ms to StartCluster
	I1119 02:45:08.828051  339411 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:45:08.828096  339411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:45:08.828568  339411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:45:08.828741  339411 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:45:08.828815  339411 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:45:08.828916  339411 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956139"
	I1119 02:45:08.828936  339411 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956139"
	I1119 02:45:08.828936  339411 addons.go:70] Setting dashboard=true in profile "newest-cni-956139"
	I1119 02:45:08.828962  339411 addons.go:239] Setting addon dashboard=true in "newest-cni-956139"
	W1119 02:45:08.828969  339411 addons.go:248] addon dashboard should already be in state true
	I1119 02:45:08.828970  339411 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956139"
	I1119 02:45:08.828995  339411 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:08.829002  339411 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956139"
	I1119 02:45:08.829000  339411 host.go:66] Checking if "newest-cni-956139" exists ...
	W1119 02:45:08.828944  339411 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:45:08.829164  339411 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:45:08.829358  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:08.829562  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:08.829627  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:08.831644  339411 out.go:179] * Verifying Kubernetes components...
	I1119 02:45:08.832960  339411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:45:08.853677  339411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:45:08.853727  339411 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:45:08.853803  339411 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956139"
	W1119 02:45:08.853823  339411 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:45:08.853844  339411 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:45:08.854243  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:08.854941  339411 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:45:08.854959  339411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:45:08.855005  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:08.856084  339411 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:45:08.857243  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:45:08.857261  339411 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:45:08.857316  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:08.886835  339411 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:45:08.886863  339411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:45:08.886928  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:08.887596  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:08.890367  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:08.910379  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:08.965967  339411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:45:08.978553  339411 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:45:08.978624  339411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:45:08.991834  339411 api_server.go:72] duration metric: took 163.071285ms to wait for apiserver process to appear ...
	I1119 02:45:08.991855  339411 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:45:08.991872  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:08.997704  339411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:45:09.001252  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:45:09.001268  339411 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:45:09.014955  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:45:09.014976  339411 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:45:09.017866  339411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:45:09.029699  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:45:09.029720  339411 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:45:09.045904  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:45:09.045922  339411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:45:09.062840  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:45:09.062864  339411 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:45:09.079275  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:45:09.079294  339411 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:45:09.091631  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:45:09.091648  339411 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:45:09.103051  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:45:09.103068  339411 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:45:09.115108  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:45:09.115124  339411 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:45:09.128183  339411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:45:10.370345  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 02:45:10.370374  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 02:45:10.370389  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:10.425600  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1119 02:45:10.425629  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1119 02:45:10.492610  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:10.497517  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:45:10.497542  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:45:10.911644  339411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.913909438s)
	I1119 02:45:10.911711  339411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.893787367s)
	I1119 02:45:10.911822  339411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.783600631s)
	I1119 02:45:10.913676  339411 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-956139 addons enable metrics-server
	
	I1119 02:45:10.921887  339411 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 02:45:10.923112  339411 addons.go:515] duration metric: took 2.094309129s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
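Note: the toEnable map logged at the start of "enable addons" drives this summary: only entries set to true are installed, which is why exactly storage-provisioner, dashboard, and default-storageclass appear. A toy reduction of that gating over a subset of the map:

-- example (go) --
package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	// Subset of the toEnable map from the log; false entries are skipped.
	toEnable := map[string]bool{
		"dashboard":            true,
		"default-storageclass": true,
		"storage-provisioner":  true,
		"metrics-server":       false,
		"ingress":              false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled)
	fmt.Println("* Enabled addons:", strings.Join(enabled, ", "))
}
-- /example --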
	I1119 02:45:10.992869  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:10.996479  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:45:10.996506  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:45:11.492151  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:11.496415  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:45:11.496456  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:45:11.991985  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:11.996575  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:45:11.997533  339411 api_server.go:141] control plane version: v1.34.1
	I1119 02:45:11.997564  339411 api_server.go:131] duration metric: took 3.005701105s to wait for apiserver health ...
	I1119 02:45:11.997575  339411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:45:12.001155  339411 system_pods.go:59] 8 kube-system pods found
	I1119 02:45:12.001191  339411 system_pods.go:61] "coredns-66bc5c9577-l7vmx" [0d704d05-424c-4c54-bdf6-a5ec01cbcbf8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:45:12.001201  339411 system_pods.go:61] "etcd-newest-cni-956139" [724e0280-bcab-4c1e-aae3-5a7a72519d23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:45:12.001216  339411 system_pods.go:61] "kindnet-s65nc" [20583cba-5129-470f-b6f9-869642b28f93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:45:12.001228  339411 system_pods.go:61] "kube-apiserver-newest-cni-956139" [a81fa4fa-fea5-4996-9230-94e06fb3b276] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:45:12.001240  339411 system_pods.go:61] "kube-controller-manager-newest-cni-956139" [a93f6b9a-946c-4099-bbc0-139db17304e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:45:12.001251  339411 system_pods.go:61] "kube-proxy-7frpm" [7f447bc0-73e5-4008-b474-551b69553ce3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:45:12.001256  339411 system_pods.go:61] "kube-scheduler-newest-cni-956139" [ebd7110b-7108-4bca-b86d-c7126087da9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:45:12.001263  339411 system_pods.go:61] "storage-provisioner" [b8a81262-3433-4dd4-a802-58a9b4440545] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:45:12.001270  339411 system_pods.go:74] duration metric: took 3.688105ms to wait for pod list to return data ...
	I1119 02:45:12.001283  339411 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:45:12.003642  339411 default_sa.go:45] found service account: "default"
	I1119 02:45:12.003658  339411 default_sa.go:55] duration metric: took 2.369724ms for default service account to be created ...
	I1119 02:45:12.003668  339411 kubeadm.go:587] duration metric: took 3.174909379s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:45:12.003688  339411 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:45:12.006040  339411 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:45:12.006067  339411 node_conditions.go:123] node cpu capacity is 8
	I1119 02:45:12.006084  339411 node_conditions.go:105] duration metric: took 2.39132ms to run NodePressure ...
	I1119 02:45:12.006099  339411 start.go:242] waiting for startup goroutines ...
	I1119 02:45:12.006109  339411 start.go:247] waiting for cluster config update ...
	I1119 02:45:12.006125  339411 start.go:256] writing updated cluster config ...
	I1119 02:45:12.006455  339411 ssh_runner.go:195] Run: rm -f paused
	I1119 02:45:12.057363  339411 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:45:12.060472  339411 out.go:179] * Done! kubectl is now configured to use "newest-cni-956139" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.522806954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.527929023Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=af712ab4-eeb5-40f2-91a3-8e0352ce80ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.528399423Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c5e37559-18b3-4add-8015-220b18d3a83b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.529625275Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.530189928Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.530604893Z" level=info msg="Ran pod sandbox c3e9bd684e0f4174936282dd00c75c393907f1dc65241c3d174b34677b0f6848 with infra container: kube-system/kindnet-s65nc/POD" id=af712ab4-eeb5-40f2-91a3-8e0352ce80ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.531038008Z" level=info msg="Ran pod sandbox a3e1c159ae94a252481b5b61b27ca32a579a37450eed5994230e58c02f765c06 with infra container: kube-system/kube-proxy-7frpm/POD" id=c5e37559-18b3-4add-8015-220b18d3a83b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.531662364Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e6b15355-a506-46b9-a78a-6df11b3e6dc8 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.532107215Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4f214618-8d2c-43be-809a-ca6b8e4ff142 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.532526183Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6eae5b18-4bb1-45dd-ba98-36741dc809a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.533014891Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fa400924-2868-469f-8924-2743ddfe2669 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.533651887Z" level=info msg="Creating container: kube-system/kindnet-s65nc/kindnet-cni" id=8df5d4b2-ef5b-4f0b-827c-7b91f408e669 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.533747497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.53386648Z" level=info msg="Creating container: kube-system/kube-proxy-7frpm/kube-proxy" id=d6e24787-5ff5-42ca-b141-9b5b703f6c46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.533981872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.538586125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.539064134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.540487588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.540904743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.56645761Z" level=info msg="Created container b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4: kube-system/kindnet-s65nc/kindnet-cni" id=8df5d4b2-ef5b-4f0b-827c-7b91f408e669 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.567079889Z" level=info msg="Starting container: b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4" id=52283d34-7592-4a8b-8220-21bbc0b61e8b name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.569256999Z" level=info msg="Started container" PID=1047 containerID=b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4 description=kube-system/kindnet-s65nc/kindnet-cni id=52283d34-7592-4a8b-8220-21bbc0b61e8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3e9bd684e0f4174936282dd00c75c393907f1dc65241c3d174b34677b0f6848
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.569753395Z" level=info msg="Created container 036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320: kube-system/kube-proxy-7frpm/kube-proxy" id=d6e24787-5ff5-42ca-b141-9b5b703f6c46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.57029954Z" level=info msg="Starting container: 036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320" id=8ef46e4d-c348-464a-9427-b01ad2f333fe name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.573184523Z" level=info msg="Started container" PID=1048 containerID=036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320 description=kube-system/kube-proxy-7frpm/kube-proxy id=8ef46e4d-c348-464a-9427-b01ad2f333fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3e1c159ae94a252481b5b61b27ca32a579a37450eed5994230e58c02f765c06
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	036efdabb6a0e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   a3e1c159ae94a       kube-proxy-7frpm                            kube-system
	b3b0fe7af03c0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   c3e9bd684e0f4       kindnet-s65nc                               kube-system
	03bea9a3d49a8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   9f84555372005       kube-scheduler-newest-cni-956139            kube-system
	003043020d52b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   f8aad44e1040f       kube-apiserver-newest-cni-956139            kube-system
	2b5bcddd47a71       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   9e438524c0add       kube-controller-manager-newest-cni-956139   kube-system
	ce178759508ab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   cac2271a6b593       etcd-newest-cni-956139                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-956139
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-956139
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=newest-cni-956139
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_44_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:44:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-956139
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:45:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:45:10 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:45:10 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:45:10 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 02:45:10 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-956139
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                cae1bba0-7daf-47af-a2b2-8c3f8909ef7d
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-956139                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-s65nc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-956139             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-956139    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-7frpm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-956139             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 20s              kube-proxy       
	  Normal  Starting                 4s               kube-proxy       
	  Normal  Starting                 26s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s              kubelet          Node newest-cni-956139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s              kubelet          Node newest-cni-956139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s              kubelet          Node newest-cni-956139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s              node-controller  Node newest-cni-956139 event: Registered Node newest-cni-956139 in Controller
	  Normal  Starting                 7s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)  kubelet          Node newest-cni-956139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)  kubelet          Node newest-cni-956139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 7s)  kubelet          Node newest-cni-956139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s               node-controller  Node newest-cni-956139 event: Registered Node newest-cni-956139 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [ce178759508ab503068b3436802db13c84f6f6f0a22e84e8542a3a203fc6cbba] <==
	{"level":"warn","ts":"2025-11-19T02:45:09.774651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.781287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.788143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.797753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.811964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.818864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.825220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.832221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.839795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.846116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.854566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.860541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.866522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.873579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.880366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.886242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.892553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.898609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.904309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.910443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.916374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.922356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.937535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.943491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.952604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60276","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:45:15 up  1:27,  0 user,  load average: 2.99, 3.23, 2.29
	Linux newest-cni-956139 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4] <==
	I1119 02:45:11.748402       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:45:11.748641       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:45:11.748762       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:45:11.748777       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:45:11.748800       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:45:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:45:11.947217       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:45:11.947240       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:45:11.947248       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:45:12.026559       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:45:12.247980       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:45:12.248018       1 metrics.go:72] Registering metrics
	I1119 02:45:12.248105       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [003043020d52b3254d0fb8dce6b5de51f9100fc7116a74f3d11a53430cd79dfb] <==
	I1119 02:45:10.456848       1 aggregator.go:171] initial CRD sync complete...
	I1119 02:45:10.456856       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:45:10.456861       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:45:10.456869       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:45:10.456875       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:45:10.456900       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:45:10.457008       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:45:10.457016       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:45:10.457241       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 02:45:10.457244       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 02:45:10.457717       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:45:10.457832       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 02:45:10.465955       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:45:10.510289       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:45:10.726308       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:45:10.750715       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:45:10.766404       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:45:10.772144       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:45:10.778351       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:45:10.806180       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.178.9"}
	I1119 02:45:10.817031       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.236.127"}
	I1119 02:45:11.360041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:45:14.148370       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:45:14.199681       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:45:14.246857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2b5bcddd47a712b8ec423213ace360e69321fcb5d2ba522d16fede41bdbecf8f] <==
	I1119 02:45:13.795485       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:45:13.795509       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:45:13.795537       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:45:13.795547       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:45:13.795561       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:45:13.795565       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:45:13.795605       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:45:13.798566       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:45:13.801713       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:45:13.806906       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:45:13.806958       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:45:13.807006       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:45:13.807018       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:45:13.807025       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:45:13.809172       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 02:45:13.810328       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:45:13.814570       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:45:13.816805       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:45:13.816895       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:45:13.816978       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-956139"
	I1119 02:45:13.817030       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:45:13.818079       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:45:13.820152       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:45:13.822357       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:45:13.827642       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320] <==
	I1119 02:45:11.604128       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:45:11.670960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:45:11.771715       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:45:11.771765       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 02:45:11.771906       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:45:11.788913       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:45:11.788971       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:45:11.793805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:45:11.794222       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:45:11.794257       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:45:11.795569       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:45:11.795590       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:45:11.795658       1 config.go:200] "Starting service config controller"
	I1119 02:45:11.795674       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:45:11.795704       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:45:11.795712       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:45:11.795717       1 config.go:309] "Starting node config controller"
	I1119 02:45:11.795727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:45:11.895794       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:45:11.895821       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:45:11.895804       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:45:11.895840       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [03bea9a3d49a8d6bd560dac25fbaca2c60e0f475532d71b1f7e8403b3b5770a3] <==
	I1119 02:45:09.480460       1 serving.go:386] Generated self-signed cert in-memory
	I1119 02:45:10.446460       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:45:10.446486       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:45:10.451322       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 02:45:10.451327       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:45:10.451362       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:45:10.451331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:45:10.451383       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:45:10.451365       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 02:45:10.451783       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:45:10.451810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:45:10.552515       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:45:10.552521       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:45:10.552713       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.252279     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-956139\" not found" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.252402     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-956139\" not found" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.252563     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-956139\" not found" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.420525     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.487892     671 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.487989     671 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.488023     671 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.488906     671 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.539000     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-956139\" already exists" pod="kube-system/kube-scheduler-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.539036     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.544899     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-956139\" already exists" pod="kube-system/etcd-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.544936     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.550119     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-956139\" already exists" pod="kube-system/kube-apiserver-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.550273     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.557685     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-956139\" already exists" pod="kube-system/kube-controller-manager-newest-cni-956139"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.214248     671 apiserver.go:52] "Watching apiserver"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.316645     671 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412557     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f447bc0-73e5-4008-b474-551b69553ce3-lib-modules\") pod \"kube-proxy-7frpm\" (UID: \"7f447bc0-73e5-4008-b474-551b69553ce3\") " pod="kube-system/kube-proxy-7frpm"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412727     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f447bc0-73e5-4008-b474-551b69553ce3-xtables-lock\") pod \"kube-proxy-7frpm\" (UID: \"7f447bc0-73e5-4008-b474-551b69553ce3\") " pod="kube-system/kube-proxy-7frpm"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412828     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-cni-cfg\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412875     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-lib-modules\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412917     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-xtables-lock\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:45:12 newest-cni-956139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:45:13 newest-cni-956139 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:45:13 newest-cni-956139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
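
The healthz exchange in the log above is minikube waiting for the restarted apiserver: repeated GETs to https://192.168.76.2:8443/healthz return 500 while poststarthooks such as rbac/bootstrap-roles are still completing, then 200 once they finish (note the roughly 500ms cadence between 02:45:10.996, 02:45:11.492 and 02:45:11.992). A minimal sketch of such a wait loop follows; this is an illustration, not minikube's actual api_server.go, and a real client would trust the cluster CA rather than skip TLS verification.

// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200,
// tolerating transient 500s while poststarthooks finish. Illustrative sketch.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Shortcut for this sketch only; real code should verify the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok" in the log
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the retry cadence seen above
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}
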
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956139 -n newest-cni-956139
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956139 -n newest-cni-956139: exit status 2 (304.268105ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-956139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-l7vmx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5nfsz kubernetes-dashboard-855c9754f9-7frml
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5nfsz kubernetes-dashboard-855c9754f9-7frml
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5nfsz kubernetes-dashboard-855c9754f9-7frml: exit status 1 (55.707214ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-l7vmx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-5nfsz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7frml" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5nfsz kubernetes-dashboard-855c9754f9-7frml: exit status 1
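
The NotFound errors above are consistent with this post-mortem path: the field selector status.phase!=Running captured pods that were Pending at list time, but they had been deleted or replaced by the time the describe ran. A hedged sketch of the same listing query, shelling out to kubectl (the context name is taken from the log; the helper name is ours):

// nonrunning.go: list pods in any phase other than Running across all
// namespaces, using the same field selector the harness uses. Sketch only.
package main

import (
	"fmt"
	"os/exec"
)

func nonRunningPods(kubeContext string) (string, error) {
	out, err := exec.Command("kubectl",
		"--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	return string(out), err
}

func main() {
	names, err := nonRunningPods("newest-cni-956139")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("non-running pods:", names)
}
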
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-956139
helpers_test.go:243: (dbg) docker inspect newest-cni-956139:

-- stdout --
	[
	    {
	        "Id": "9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3",
	        "Created": "2025-11-19T02:44:35.029315719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 339702,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:45:01.314107768Z",
	            "FinishedAt": "2025-11-19T02:45:00.429678243Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/hosts",
	        "LogPath": "/var/lib/docker/containers/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3/9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3-json.log",
	        "Name": "/newest-cni-956139",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-956139:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-956139",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9939767f1de878098e5310b942eb9d11ec3f25e3b9ab60bfa602e352f92c64c3",
	                "LowerDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603-init/diff:/var/lib/docker/overlay2/10dcbd04dd53463a9d8dc947ef22df9efa1b3a4d07707317c3869fa39d5b22a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8113759fe0dc1846824f71b5017071cba92b91383000cb8145ce591dacbc603/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-956139",
	                "Source": "/var/lib/docker/volumes/newest-cni-956139/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-956139",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-956139",
	                "name.minikube.sigs.k8s.io": "newest-cni-956139",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c198b1f73e620dffd762b99390015e6f0324225742fb53a1e3570ad46c4c520a",
	            "SandboxKey": "/var/run/docker/netns/c198b1f73e62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-956139": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c158809bc17a4fb99da40a7d719b98cd3e7fa529cc77ba53af7dfac4ad266e67",
	                    "EndpointID": "438f20f6cbc2ba15db06c896c725d201e146079b30f3f993009c3a5d429feac4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "6a:0d:3d:6e:29:65",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-956139",
	                        "9939767f1de8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
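The inspect output above is what the harness parses to reach the node container: the published 22/tcp HostPort (33133 here) is looked up with a Go template, as the "docker container inspect -f" calls later in this log show. A minimal sketch of that lookup in Go, reusing the template string the log records (illustrative only, not minikube's own helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Template taken from the cli_runner lines below; it walks
		// .NetworkSettings.Ports["22/tcp"][0].HostPort in the inspect JSON.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "newest-cni-956139").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // prints e.g. 33133
	}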
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139: exit status 2 (303.875106ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-956139 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ addons  │ enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:43 UTC │
	│ start   │ -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:43 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ old-k8s-version-987573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p old-k8s-version-987573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ delete  │ -p old-k8s-version-987573                                                                                                                                                                                                                     │ old-k8s-version-987573       │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ embed-certs-811173 image list --format=json                                                                                                                                                                                                   │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p embed-certs-811173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ image   │ default-k8s-diff-port-167150 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p default-k8s-diff-port-167150 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ delete  │ -p embed-certs-811173                                                                                                                                                                                                                         │ embed-certs-811173           │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ stop    │ -p newest-cni-956139 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:45 UTC │
	│ image   │ no-preload-837474 image list --format=json                                                                                                                                                                                                    │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │ 19 Nov 25 02:44 UTC │
	│ pause   │ -p no-preload-837474 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:44 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-956139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ start   │ -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ delete  │ -p default-k8s-diff-port-167150                                                                                                                                                                                                               │ default-k8s-diff-port-167150 │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ delete  │ -p no-preload-837474                                                                                                                                                                                                                          │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ delete  │ -p no-preload-837474                                                                                                                                                                                                                          │ no-preload-837474            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ image   │ newest-cni-956139 image list --format=json                                                                                                                                                                                                    │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │ 19 Nov 25 02:45 UTC │
	│ pause   │ -p newest-cni-956139 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-956139            │ jenkins │ v1.37.0 │ 19 Nov 25 02:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:45:01.079633  339411 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:45:01.079896  339411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:01.079905  339411 out.go:374] Setting ErrFile to fd 2...
	I1119 02:45:01.079910  339411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:45:01.080082  339411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:45:01.080519  339411 out.go:368] Setting JSON to false
	I1119 02:45:01.081542  339411 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5248,"bootTime":1763515053,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:45:01.081600  339411 start.go:143] virtualization: kvm guest
	I1119 02:45:01.083571  339411 out.go:179] * [newest-cni-956139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:45:01.087840  339411 notify.go:221] Checking for updates...
	I1119 02:45:01.089172  339411 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:45:01.090249  339411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:45:01.091223  339411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:45:01.092319  339411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:45:01.093387  339411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:45:01.094361  339411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:45:01.095805  339411 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:01.096303  339411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:45:01.118824  339411 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:45:01.118911  339411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:01.181407  339411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:false NGoroutines:68 SystemTime:2025-11-19 02:45:01.171333163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:01.181546  339411 docker.go:319] overlay module found
	I1119 02:45:01.183364  339411 out.go:179] * Using the docker driver based on existing profile
	I1119 02:45:01.184618  339411 start.go:309] selected driver: docker
	I1119 02:45:01.184633  339411 start.go:930] validating driver "docker" against &{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:01.184704  339411 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:45:01.185227  339411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:45:01.247035  339411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:false NGoroutines:62 SystemTime:2025-11-19 02:45:01.237379667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:45:01.247406  339411 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:45:01.247459  339411 cni.go:84] Creating CNI manager for ""
	I1119 02:45:01.247540  339411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:45:01.247593  339411 start.go:353] cluster config:
	{Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:01.250071  339411 out.go:179] * Starting "newest-cni-956139" primary control-plane node in "newest-cni-956139" cluster
	I1119 02:45:01.251167  339411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:45:01.252290  339411 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:45:01.253285  339411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:45:01.253315  339411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 02:45:01.253324  339411 cache.go:65] Caching tarball of preloaded images
	I1119 02:45:01.253382  339411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:45:01.253423  339411 preload.go:238] Found /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 02:45:01.253478  339411 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:45:01.253611  339411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:45:01.272884  339411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:45:01.272903  339411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:45:01.272916  339411 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:45:01.272936  339411 start.go:360] acquireMachinesLock for newest-cni-956139: {Name:mk15a132b2574a22e8a886ba5601ed901f63d00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:45:01.272994  339411 start.go:364] duration metric: took 34.51µs to acquireMachinesLock for "newest-cni-956139"
	I1119 02:45:01.273009  339411 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:45:01.273016  339411 fix.go:54] fixHost starting: 
	I1119 02:45:01.273196  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:01.289647  339411 fix.go:112] recreateIfNeeded on newest-cni-956139: state=Stopped err=<nil>
	W1119 02:45:01.289674  339411 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:45:01.291301  339411 out.go:252] * Restarting existing docker container for "newest-cni-956139" ...
	I1119 02:45:01.291349  339411 cli_runner.go:164] Run: docker start newest-cni-956139
	I1119 02:45:01.578382  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:01.603013  339411 kic.go:430] container "newest-cni-956139" state is running.
	I1119 02:45:01.603482  339411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:45:01.626380  339411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/config.json ...
	I1119 02:45:01.626664  339411 machine.go:94] provisionDockerMachine start ...
	I1119 02:45:01.626750  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:01.649062  339411 main.go:143] libmachine: Using SSH client type: native
	I1119 02:45:01.649329  339411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1119 02:45:01.649448  339411 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:45:01.650180  339411 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59880->127.0.0.1:33133: read: connection reset by peer
	I1119 02:45:04.782405  339411 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
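The dial error at 02:45:01 ("connection reset by peer") followed by the clean hostname result at 02:45:04 is the provisioner waiting for sshd inside the restarted container to come up. A hedged sketch of that wait-for-port pattern (the address is taken from this log; the interval and deadline are illustrative, not minikube's actual retry policy):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls a TCP endpoint until it accepts a connection or the
	// deadline passes; early dials fail while sshd is still starting.
	func waitForSSH(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond) // illustrative retry interval
		}
		return fmt.Errorf("%s not reachable within %s", addr, deadline)
	}

	func main() {
		if err := waitForSSH("127.0.0.1:33133", 30*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("sshd is up")
	}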
	
	I1119 02:45:04.782453  339411 ubuntu.go:182] provisioning hostname "newest-cni-956139"
	I1119 02:45:04.782524  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:04.801569  339411 main.go:143] libmachine: Using SSH client type: native
	I1119 02:45:04.801918  339411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1119 02:45:04.801940  339411 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956139 && echo "newest-cni-956139" | sudo tee /etc/hostname
	I1119 02:45:04.940427  339411 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956139
	
	I1119 02:45:04.940521  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:04.961142  339411 main.go:143] libmachine: Using SSH client type: native
	I1119 02:45:04.961461  339411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1119 02:45:04.961492  339411 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956139/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:45:05.092494  339411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:45:05.092522  339411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11126/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11126/.minikube}
	I1119 02:45:05.092564  339411 ubuntu.go:190] setting up certificates
	I1119 02:45:05.092573  339411 provision.go:84] configureAuth start
	I1119 02:45:05.092618  339411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:45:05.109653  339411 provision.go:143] copyHostCerts
	I1119 02:45:05.109722  339411 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem, removing ...
	I1119 02:45:05.109740  339411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem
	I1119 02:45:05.109816  339411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/ca.pem (1082 bytes)
	I1119 02:45:05.109943  339411 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem, removing ...
	I1119 02:45:05.109955  339411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem
	I1119 02:45:05.110000  339411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/cert.pem (1123 bytes)
	I1119 02:45:05.110099  339411 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem, removing ...
	I1119 02:45:05.110110  339411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem
	I1119 02:45:05.110149  339411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11126/.minikube/key.pem (1675 bytes)
	I1119 02:45:05.110223  339411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956139 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956139]
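The server certificate generated above carries SANs for every name the machine may be addressed by: 127.0.0.1, 192.168.76.2, localhost, minikube, and the node name. A compact sketch of issuing such a CA-signed cert with Go's crypto/x509 (the CA is generated inline here only for self-containment; the run above instead loads ca.pem/ca-key.pem from the .minikube certs directory, and the 26280h lifetime mirrors the CertExpiration value in the profile config):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Stand-in CA; in the run above this comes from the .minikube certs dir.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-956139"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provision.go line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-956139"},
		}
		_, _ = x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	}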
	I1119 02:45:06.040181  339411 provision.go:177] copyRemoteCerts
	I1119 02:45:06.040250  339411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:45:06.040309  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.057778  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.150670  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:45:06.166823  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:45:06.182773  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:45:06.198548  339411 provision.go:87] duration metric: took 1.105962937s to configureAuth
	I1119 02:45:06.198571  339411 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:45:06.198736  339411 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:06.198844  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.217632  339411 main.go:143] libmachine: Using SSH client type: native
	I1119 02:45:06.217823  339411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1119 02:45:06.217842  339411 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:45:06.491606  339411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:45:06.491634  339411 machine.go:97] duration metric: took 4.86494951s to provisionDockerMachine
	I1119 02:45:06.491646  339411 start.go:293] postStartSetup for "newest-cni-956139" (driver="docker")
	I1119 02:45:06.491657  339411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:45:06.491713  339411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:45:06.491758  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.509766  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.602847  339411 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:45:06.605984  339411 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:45:06.606017  339411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:45:06.606028  339411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/addons for local assets ...
	I1119 02:45:06.606079  339411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11126/.minikube/files for local assets ...
	I1119 02:45:06.606155  339411 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem -> 146342.pem in /etc/ssl/certs
	I1119 02:45:06.606236  339411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:45:06.613102  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:45:06.629391  339411 start.go:296] duration metric: took 137.732659ms for postStartSetup
	I1119 02:45:06.629476  339411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:45:06.629521  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.647458  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.737831  339411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:45:06.742173  339411 fix.go:56] duration metric: took 5.469149933s for fixHost
	I1119 02:45:06.742200  339411 start.go:83] releasing machines lock for "newest-cni-956139", held for 5.469195147s
	I1119 02:45:06.742256  339411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956139
	I1119 02:45:06.760619  339411 ssh_runner.go:195] Run: cat /version.json
	I1119 02:45:06.760661  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.760732  339411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:45:06.760783  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:06.780106  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.780941  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:06.874379  339411 ssh_runner.go:195] Run: systemctl --version
	I1119 02:45:06.931706  339411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:45:06.965640  339411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:45:06.970209  339411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:45:06.970271  339411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:45:06.977657  339411 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:45:06.977675  339411 start.go:496] detecting cgroup driver to use...
	I1119 02:45:06.977699  339411 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:45:06.977737  339411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:45:06.990698  339411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:45:07.001855  339411 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:45:07.001920  339411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:45:07.015158  339411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:45:07.026200  339411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:45:07.102576  339411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:45:07.183456  339411 docker.go:234] disabling docker service ...
	I1119 02:45:07.183514  339411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:45:07.198832  339411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:45:07.211763  339411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:45:07.293050  339411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:45:07.373337  339411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:45:07.385485  339411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:45:07.398486  339411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:45:07.398535  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.406473  339411 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 02:45:07.406531  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.414471  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.422357  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.430343  339411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:45:07.437857  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.446164  339411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:45:07.453877  339411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
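Net effect of the sed edits above: /etc/crio/crio.conf.d/02-crio.conf ends up with roughly the following fragment (a reconstruction from the commands, not a capture of the file):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The cgroup manager matches the "systemd" driver detected on the host earlier in the log, and the sysctl lets pods bind ports below 1024 without extra privileges.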
	I1119 02:45:07.462010  339411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:45:07.468849  339411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:45:07.475479  339411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:45:07.551854  339411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:45:07.721169  339411 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:45:07.721235  339411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:45:07.726527  339411 start.go:564] Will wait 60s for crictl version
	I1119 02:45:07.726592  339411 ssh_runner.go:195] Run: which crictl
	I1119 02:45:07.729907  339411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:45:07.752462  339411 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:45:07.752534  339411 ssh_runner.go:195] Run: crio --version
	I1119 02:45:07.779996  339411 ssh_runner.go:195] Run: crio --version
	I1119 02:45:07.814427  339411 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:45:07.816483  339411 cli_runner.go:164] Run: docker network inspect newest-cni-956139 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:45:07.835079  339411 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:45:07.839639  339411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:45:07.853234  339411 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 02:45:07.854372  339411 kubeadm.go:884] updating cluster {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:45:07.854510  339411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:45:07.854558  339411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:45:07.890029  339411 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:45:07.890046  339411 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:45:07.890085  339411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:45:07.918515  339411 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:45:07.918534  339411 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:45:07.918541  339411 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:45:07.918641  339411 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:45:07.918700  339411 ssh_runner.go:195] Run: crio config
	I1119 02:45:07.963287  339411 cni.go:84] Creating CNI manager for ""
	I1119 02:45:07.963313  339411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:45:07.963329  339411 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:45:07.963359  339411 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956139 NodeName:newest-cni-956139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:45:07.963541  339411 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956139"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
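
As a side note on the config above: it is four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch of how such a multi-document file can be split and inspected, assuming it has been saved locally as kubeadm.yaml (illustrative only, not minikube's own code):

	// Decode each "---"-separated document in the kubeadm config and
	// report its apiVersion/kind. Assumes gopkg.in/yaml.v3 is available.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // one Decode call per YAML document
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}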
	
	I1119 02:45:07.963602  339411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:45:07.971811  339411 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:45:07.971873  339411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:45:07.979777  339411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 02:45:07.992762  339411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:45:08.005532  339411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1119 02:45:08.016987  339411 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:45:08.020448  339411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
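
The bash one-liner above makes the /etc/hosts entry idempotent: strip any stale control-plane.minikube.internal line, then append the current mapping. A rough Go equivalent, assuming direct local file access (minikube actually performs this over SSH inside the node):

	// Rewrite /etc/hosts so exactly one control-plane.minikube.internal
	// entry exists, pointing at the current node IP. IP and hostname are
	// taken from this run's log.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const ip, host = "192.168.76.2", "control-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) { // drop only the old entry
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}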
	I1119 02:45:08.029864  339411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:45:08.117845  339411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:45:08.148863  339411 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139 for IP: 192.168.76.2
	I1119 02:45:08.148882  339411 certs.go:195] generating shared ca certs ...
	I1119 02:45:08.148900  339411 certs.go:227] acquiring lock for ca certs: {Name:mkfa94aafd627ee4c1a185e05aa520339d3c22d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:45:08.149042  339411 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key
	I1119 02:45:08.149111  339411 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key
	I1119 02:45:08.149130  339411 certs.go:257] generating profile certs ...
	I1119 02:45:08.149245  339411 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/client.key
	I1119 02:45:08.149327  339411 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key.107d1e6d
	I1119 02:45:08.149378  339411 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key
	I1119 02:45:08.149533  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem (1338 bytes)
	W1119 02:45:08.149577  339411 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634_empty.pem, impossibly tiny 0 bytes
	I1119 02:45:08.149588  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca-key.pem (1671 bytes)
	I1119 02:45:08.149625  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:45:08.149656  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:45:08.149690  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/certs/key.pem (1675 bytes)
	I1119 02:45:08.149750  339411 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem (1708 bytes)
	I1119 02:45:08.150490  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:45:08.170609  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:45:08.189985  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:45:08.209416  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:45:08.235375  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:45:08.252887  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:45:08.269062  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:45:08.284703  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/newest-cni-956139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:45:08.300158  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/ssl/certs/146342.pem --> /usr/share/ca-certificates/146342.pem (1708 bytes)
	I1119 02:45:08.315646  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:45:08.331090  339411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11126/.minikube/certs/14634.pem --> /usr/share/ca-certificates/14634.pem (1338 bytes)
	I1119 02:45:08.347387  339411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:45:08.358597  339411 ssh_runner.go:195] Run: openssl version
	I1119 02:45:08.364174  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:45:08.371687  339411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:45:08.374967  339411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:45:08.375005  339411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:45:08.407666  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:45:08.414564  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14634.pem && ln -fs /usr/share/ca-certificates/14634.pem /etc/ssl/certs/14634.pem"
	I1119 02:45:08.422608  339411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14634.pem
	I1119 02:45:08.426026  339411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14634.pem
	I1119 02:45:08.426070  339411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14634.pem
	I1119 02:45:08.458986  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14634.pem /etc/ssl/certs/51391683.0"
	I1119 02:45:08.466035  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146342.pem && ln -fs /usr/share/ca-certificates/146342.pem /etc/ssl/certs/146342.pem"
	I1119 02:45:08.473546  339411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146342.pem
	I1119 02:45:08.476848  339411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146342.pem
	I1119 02:45:08.476881  339411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146342.pem
	I1119 02:45:08.509809  339411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146342.pem /etc/ssl/certs/3ec20f2e.0"
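
The pattern repeated three times above is OpenSSL's hashed certificate directory layout: each CA PEM under /etc/ssl/certs needs a <subject-hash>.0 symlink so openssl can locate it. A small Go sketch of one round, shelling out to openssl just as the log does (the PEM path is taken from this run; the rest is an assumption):

	// Compute the OpenSSL subject hash of a CA PEM and create the
	// corresponding <hash>.0 link in /etc/ssl/certs, mirroring "ln -fs".
	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // -f semantics: replace any stale link
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
	}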
	I1119 02:45:08.516871  339411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:45:08.520222  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:45:08.553900  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:45:08.588040  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:45:08.620691  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:45:08.655677  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:45:08.697408  339411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
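
These six probes are `openssl x509 -checkend 86400`, i.e. "is this cert still valid 24 hours from now". The same check can be done in-process with crypto/x509; a sketch over a subset of the files listed above (doing it natively would also avoid six openssl forks):

	// Parse each control-plane cert and verify it is valid for at least
	// another 24h, the same condition -checkend 86400 tests.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		}
		for _, path := range certs {
			data, err := os.ReadFile(path)
			if err != nil {
				log.Fatal(err)
			}
			block, _ := pem.Decode(data)
			if block == nil {
				log.Fatalf("%s: no PEM block found", path)
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				log.Fatal(err)
			}
			ok := time.Until(cert.NotAfter) > 24*time.Hour
			fmt.Printf("%s expires %s (valid for 24h: %v)\n", path, cert.NotAfter, ok)
		}
	}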
	I1119 02:45:08.745066  339411 kubeadm.go:401] StartCluster: {Name:newest-cni-956139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:45:08.745174  339411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:45:08.745232  339411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:45:08.783552  339411 cri.go:89] found id: "03bea9a3d49a8d6bd560dac25fbaca2c60e0f475532d71b1f7e8403b3b5770a3"
	I1119 02:45:08.783632  339411 cri.go:89] found id: "003043020d52b3254d0fb8dce6b5de51f9100fc7116a74f3d11a53430cd79dfb"
	I1119 02:45:08.783643  339411 cri.go:89] found id: "2b5bcddd47a712b8ec423213ace360e69321fcb5d2ba522d16fede41bdbecf8f"
	I1119 02:45:08.783647  339411 cri.go:89] found id: "ce178759508ab503068b3436802db13c84f6f6f0a22e84e8542a3a203fc6cbba"
	I1119 02:45:08.783652  339411 cri.go:89] found id: ""
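
The ID list above comes from crictl filtered by the kube-system namespace label. A minimal sketch of that discovery step, assuming crictl is on PATH and the caller has root (minikube runs the same command through its SSH runner):

	// List all kube-system container IDs the way the log does:
	// crictl ps -a --quiet prints one container ID per line.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}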
	I1119 02:45:08.783697  339411 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:45:08.797371  339411 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:45:08Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:45:08.797509  339411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:45:08.807910  339411 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:45:08.807932  339411 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:45:08.807982  339411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:45:08.817546  339411 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:45:08.818026  339411 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-956139" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:45:08.818196  339411 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11126/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-956139" cluster setting kubeconfig missing "newest-cni-956139" context setting]
	I1119 02:45:08.818679  339411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:45:08.820195  339411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:45:08.827993  339411 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 02:45:08.828024  339411 kubeadm.go:602] duration metric: took 20.086273ms to restartPrimaryControlPlane
	I1119 02:45:08.828039  339411 kubeadm.go:403] duration metric: took 82.993201ms to StartCluster
	I1119 02:45:08.828051  339411 settings.go:142] acquiring lock: {Name:mk39bd61177f2f8808b442f427cccc3032092975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:45:08.828096  339411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:45:08.828568  339411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/kubeconfig: {Name:mkb0d0ec188d51add3fa67ae624c9c11581068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:45:08.828741  339411 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:45:08.828815  339411 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:45:08.828916  339411 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956139"
	I1119 02:45:08.828936  339411 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956139"
	I1119 02:45:08.828936  339411 addons.go:70] Setting dashboard=true in profile "newest-cni-956139"
	I1119 02:45:08.828962  339411 addons.go:239] Setting addon dashboard=true in "newest-cni-956139"
	W1119 02:45:08.828969  339411 addons.go:248] addon dashboard should already be in state true
	I1119 02:45:08.828970  339411 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956139"
	I1119 02:45:08.828995  339411 config.go:182] Loaded profile config "newest-cni-956139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:45:08.829002  339411 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956139"
	I1119 02:45:08.829000  339411 host.go:66] Checking if "newest-cni-956139" exists ...
	W1119 02:45:08.828944  339411 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:45:08.829164  339411 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:45:08.829358  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:08.829562  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:08.829627  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:08.831644  339411 out.go:179] * Verifying Kubernetes components...
	I1119 02:45:08.832960  339411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:45:08.853677  339411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:45:08.853727  339411 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:45:08.853803  339411 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956139"
	W1119 02:45:08.853823  339411 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:45:08.853844  339411 host.go:66] Checking if "newest-cni-956139" exists ...
	I1119 02:45:08.854243  339411 cli_runner.go:164] Run: docker container inspect newest-cni-956139 --format={{.State.Status}}
	I1119 02:45:08.854941  339411 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:45:08.854959  339411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:45:08.855005  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:08.856084  339411 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:45:08.857243  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:45:08.857261  339411 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:45:08.857316  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:08.886835  339411 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:45:08.886863  339411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:45:08.886928  339411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956139
	I1119 02:45:08.887596  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:08.890367  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:08.910379  339411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/newest-cni-956139/id_rsa Username:docker}
	I1119 02:45:08.965967  339411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:45:08.978553  339411 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:45:08.978624  339411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:45:08.991834  339411 api_server.go:72] duration metric: took 163.071285ms to wait for apiserver process to appear ...
	I1119 02:45:08.991855  339411 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:45:08.991872  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:08.997704  339411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:45:09.001252  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:45:09.001268  339411 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:45:09.014955  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:45:09.014976  339411 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:45:09.017866  339411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:45:09.029699  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:45:09.029720  339411 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:45:09.045904  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:45:09.045922  339411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:45:09.062840  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:45:09.062864  339411 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:45:09.079275  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:45:09.079294  339411 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:45:09.091631  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:45:09.091648  339411 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:45:09.103051  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:45:09.103068  339411 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:45:09.115108  339411 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:45:09.115124  339411 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:45:09.128183  339411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
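
Note that the dashboard addon is applied as a single kubectl invocation with one -f flag per manifest, rather than ten separate applies. A sketch of how such a command line can be assembled, with the binary path and KUBECONFIG taken from this run's log (manifest list abbreviated):

	// Build and run one "kubectl apply" over the staged dashboard
	// manifests. Apply order matters: the namespace manifest comes first.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ... remaining manifests elided
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}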
	I1119 02:45:10.370345  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 02:45:10.370374  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 02:45:10.370389  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:10.425600  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1119 02:45:10.425629  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1119 02:45:10.492610  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:10.497517  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:45:10.497542  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:45:10.911644  339411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.913909438s)
	I1119 02:45:10.911711  339411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.893787367s)
	I1119 02:45:10.911822  339411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.783600631s)
	I1119 02:45:10.913676  339411 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-956139 addons enable metrics-server
	
	I1119 02:45:10.921887  339411 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 02:45:10.923112  339411 addons.go:515] duration metric: took 2.094309129s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 02:45:10.992869  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:10.996479  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:45:10.996506  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:45:11.492151  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:11.496415  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:45:11.496456  339411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:45:11.991985  339411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:45:11.996575  339411 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:45:11.997533  339411 api_server.go:141] control plane version: v1.34.1
	I1119 02:45:11.997564  339411 api_server.go:131] duration metric: took 3.005701105s to wait for apiserver health ...
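
The wait loop above tolerates the 403 phase (anonymous /healthz before RBAC bootstrap roles exist) and the 500 phase (poststarthooks still failing) until a plain 200/ok arrives. A standalone sketch of such a poll, skipping TLS verification for brevity (minikube itself trusts the cluster CA instead):

	// Poll the apiserver /healthz endpoint every 500ms until it returns
	// 200 OK or a deadline passes. Host and port are from this run.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.76.2:8443/healthz"
		for deadline := time.Now().Add(3 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
			resp, err := client.Get(url)
			if err != nil {
				continue // apiserver not accepting connections yet
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("timed out waiting for apiserver")
	}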
	I1119 02:45:11.997575  339411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:45:12.001155  339411 system_pods.go:59] 8 kube-system pods found
	I1119 02:45:12.001191  339411 system_pods.go:61] "coredns-66bc5c9577-l7vmx" [0d704d05-424c-4c54-bdf6-a5ec01cbcbf8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:45:12.001201  339411 system_pods.go:61] "etcd-newest-cni-956139" [724e0280-bcab-4c1e-aae3-5a7a72519d23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:45:12.001216  339411 system_pods.go:61] "kindnet-s65nc" [20583cba-5129-470f-b6f9-869642b28f93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:45:12.001228  339411 system_pods.go:61] "kube-apiserver-newest-cni-956139" [a81fa4fa-fea5-4996-9230-94e06fb3b276] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:45:12.001240  339411 system_pods.go:61] "kube-controller-manager-newest-cni-956139" [a93f6b9a-946c-4099-bbc0-139db17304e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:45:12.001251  339411 system_pods.go:61] "kube-proxy-7frpm" [7f447bc0-73e5-4008-b474-551b69553ce3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:45:12.001256  339411 system_pods.go:61] "kube-scheduler-newest-cni-956139" [ebd7110b-7108-4bca-b86d-c7126087da9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:45:12.001263  339411 system_pods.go:61] "storage-provisioner" [b8a81262-3433-4dd4-a802-58a9b4440545] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:45:12.001270  339411 system_pods.go:74] duration metric: took 3.688105ms to wait for pod list to return data ...
	I1119 02:45:12.001283  339411 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:45:12.003642  339411 default_sa.go:45] found service account: "default"
	I1119 02:45:12.003658  339411 default_sa.go:55] duration metric: took 2.369724ms for default service account to be created ...
	I1119 02:45:12.003668  339411 kubeadm.go:587] duration metric: took 3.174909379s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:45:12.003688  339411 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:45:12.006040  339411 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:45:12.006067  339411 node_conditions.go:123] node cpu capacity is 8
	I1119 02:45:12.006084  339411 node_conditions.go:105] duration metric: took 2.39132ms to run NodePressure ...
	I1119 02:45:12.006099  339411 start.go:242] waiting for startup goroutines ...
	I1119 02:45:12.006109  339411 start.go:247] waiting for cluster config update ...
	I1119 02:45:12.006125  339411 start.go:256] writing updated cluster config ...
	I1119 02:45:12.006455  339411 ssh_runner.go:195] Run: rm -f paused
	I1119 02:45:12.057363  339411 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:45:12.060472  339411 out.go:179] * Done! kubectl is now configured to use "newest-cni-956139" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.522806954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.527929023Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=af712ab4-eeb5-40f2-91a3-8e0352ce80ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.528399423Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c5e37559-18b3-4add-8015-220b18d3a83b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.529625275Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.530189928Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.530604893Z" level=info msg="Ran pod sandbox c3e9bd684e0f4174936282dd00c75c393907f1dc65241c3d174b34677b0f6848 with infra container: kube-system/kindnet-s65nc/POD" id=af712ab4-eeb5-40f2-91a3-8e0352ce80ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.531038008Z" level=info msg="Ran pod sandbox a3e1c159ae94a252481b5b61b27ca32a579a37450eed5994230e58c02f765c06 with infra container: kube-system/kube-proxy-7frpm/POD" id=c5e37559-18b3-4add-8015-220b18d3a83b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.531662364Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e6b15355-a506-46b9-a78a-6df11b3e6dc8 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.532107215Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4f214618-8d2c-43be-809a-ca6b8e4ff142 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.532526183Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6eae5b18-4bb1-45dd-ba98-36741dc809a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.533014891Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fa400924-2868-469f-8924-2743ddfe2669 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.533651887Z" level=info msg="Creating container: kube-system/kindnet-s65nc/kindnet-cni" id=8df5d4b2-ef5b-4f0b-827c-7b91f408e669 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.533747497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.53386648Z" level=info msg="Creating container: kube-system/kube-proxy-7frpm/kube-proxy" id=d6e24787-5ff5-42ca-b141-9b5b703f6c46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.533981872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.538586125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.539064134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.540487588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.540904743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.56645761Z" level=info msg="Created container b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4: kube-system/kindnet-s65nc/kindnet-cni" id=8df5d4b2-ef5b-4f0b-827c-7b91f408e669 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.567079889Z" level=info msg="Starting container: b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4" id=52283d34-7592-4a8b-8220-21bbc0b61e8b name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.569256999Z" level=info msg="Started container" PID=1047 containerID=b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4 description=kube-system/kindnet-s65nc/kindnet-cni id=52283d34-7592-4a8b-8220-21bbc0b61e8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3e9bd684e0f4174936282dd00c75c393907f1dc65241c3d174b34677b0f6848
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.569753395Z" level=info msg="Created container 036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320: kube-system/kube-proxy-7frpm/kube-proxy" id=d6e24787-5ff5-42ca-b141-9b5b703f6c46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.57029954Z" level=info msg="Starting container: 036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320" id=8ef46e4d-c348-464a-9427-b01ad2f333fe name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:45:11 newest-cni-956139 crio[517]: time="2025-11-19T02:45:11.573184523Z" level=info msg="Started container" PID=1048 containerID=036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320 description=kube-system/kube-proxy-7frpm/kube-proxy id=8ef46e4d-c348-464a-9427-b01ad2f333fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3e1c159ae94a252481b5b61b27ca32a579a37450eed5994230e58c02f765c06
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	036efdabb6a0e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   a3e1c159ae94a       kube-proxy-7frpm                            kube-system
	b3b0fe7af03c0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   c3e9bd684e0f4       kindnet-s65nc                               kube-system
	03bea9a3d49a8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   9f84555372005       kube-scheduler-newest-cni-956139            kube-system
	003043020d52b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   f8aad44e1040f       kube-apiserver-newest-cni-956139            kube-system
	2b5bcddd47a71       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   9e438524c0add       kube-controller-manager-newest-cni-956139   kube-system
	ce178759508ab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   cac2271a6b593       etcd-newest-cni-956139                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-956139
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-956139
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=newest-cni-956139
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_44_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:44:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-956139
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:45:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:45:10 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:45:10 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:45:10 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 02:45:10 +0000   Wed, 19 Nov 2025 02:44:45 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-956139
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863340Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                cae1bba0-7daf-47af-a2b2-8c3f8909ef7d
	  Boot ID:                    b74e7f3d-91af-4c1b-82f6-1745dfd2e678
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-956139                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-s65nc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-956139             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-956139    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-7frpm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-956139             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 22s              kube-proxy       
	  Normal  Starting                 5s               kube-proxy       
	  Normal  Starting                 28s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s              kubelet          Node newest-cni-956139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s              kubelet          Node newest-cni-956139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s              kubelet          Node newest-cni-956139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s              node-controller  Node newest-cni-956139 event: Registered Node newest-cni-956139 in Controller
	  Normal  Starting                 9s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)  kubelet          Node newest-cni-956139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet          Node newest-cni-956139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 9s)  kubelet          Node newest-cni-956139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s               node-controller  Node newest-cni-956139 event: Registered Node newest-cni-956139 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: c6 2a 10 38 98 ce 7e af 25 93 50 8b 08 00
	[Nov19 02:40] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 19 13 d2 34 08 06
	[  +0.000303] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 82 c7 57 ef 49 08 06
	[Nov19 02:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[  +0.001170] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 20 a4 3b 82 10 08 06
	[ +12.842438] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	[  +4.187285] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[ +19.742639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e e8 d1 08 45 d2 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 20 d4 3c 25 0c 08 06
	[Nov19 02:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 2b 58 8a 05 dc 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 b6 eb cd 7e a7 08 06
	[ +10.661146] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 1d bb 8d c6 48 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 35 97 65 5e 2e 08 06
	
	
	==> etcd [ce178759508ab503068b3436802db13c84f6f6f0a22e84e8542a3a203fc6cbba] <==
	{"level":"warn","ts":"2025-11-19T02:45:09.774651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.781287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.788143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.797753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.811964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.818864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.825220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.832221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.839795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.846116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.854566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.860541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.866522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.873579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.880366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.886242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.892553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.898609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.904309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.910443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.916374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.922356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.937535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.943491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:45:09.952604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60276","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:45:17 up  1:27,  0 user,  load average: 2.99, 3.23, 2.29
	Linux newest-cni-956139 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3b0fe7af03c0bad361d36485f4c69783f18623cd9de030982bf08f865f3fad4] <==
	I1119 02:45:11.748402       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:45:11.748641       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:45:11.748762       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:45:11.748777       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:45:11.748800       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:45:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:45:11.947217       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:45:11.947240       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:45:11.947248       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:45:12.026559       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:45:12.247980       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:45:12.248018       1 metrics.go:72] Registering metrics
	I1119 02:45:12.248105       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [003043020d52b3254d0fb8dce6b5de51f9100fc7116a74f3d11a53430cd79dfb] <==
	I1119 02:45:10.456848       1 aggregator.go:171] initial CRD sync complete...
	I1119 02:45:10.456856       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:45:10.456861       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:45:10.456869       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:45:10.456875       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:45:10.456900       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:45:10.457008       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:45:10.457016       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:45:10.457241       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 02:45:10.457244       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 02:45:10.457717       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:45:10.457832       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 02:45:10.465955       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:45:10.510289       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:45:10.726308       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:45:10.750715       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:45:10.766404       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:45:10.772144       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:45:10.778351       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:45:10.806180       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.178.9"}
	I1119 02:45:10.817031       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.236.127"}
	I1119 02:45:11.360041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:45:14.148370       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:45:14.199681       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:45:14.246857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2b5bcddd47a712b8ec423213ace360e69321fcb5d2ba522d16fede41bdbecf8f] <==
	I1119 02:45:13.795485       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:45:13.795509       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:45:13.795537       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:45:13.795547       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:45:13.795561       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:45:13.795565       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:45:13.795605       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:45:13.798566       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:45:13.801713       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:45:13.806906       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:45:13.806958       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:45:13.807006       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:45:13.807018       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:45:13.807025       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:45:13.809172       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 02:45:13.810328       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:45:13.814570       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:45:13.816805       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:45:13.816895       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:45:13.816978       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-956139"
	I1119 02:45:13.817030       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:45:13.818079       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:45:13.820152       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:45:13.822357       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 02:45:13.827642       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [036efdabb6a0eeacfb0cb5fd41704314387fa6013fc124eb80b7258169f8c320] <==
	I1119 02:45:11.604128       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:45:11.670960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:45:11.771715       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:45:11.771765       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 02:45:11.771906       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:45:11.788913       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:45:11.788971       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:45:11.793805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:45:11.794222       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:45:11.794257       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:45:11.795569       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:45:11.795590       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:45:11.795658       1 config.go:200] "Starting service config controller"
	I1119 02:45:11.795674       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:45:11.795704       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:45:11.795712       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:45:11.795717       1 config.go:309] "Starting node config controller"
	I1119 02:45:11.795727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:45:11.895794       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:45:11.895821       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:45:11.895804       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:45:11.895840       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [03bea9a3d49a8d6bd560dac25fbaca2c60e0f475532d71b1f7e8403b3b5770a3] <==
	I1119 02:45:09.480460       1 serving.go:386] Generated self-signed cert in-memory
	I1119 02:45:10.446460       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:45:10.446486       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:45:10.451322       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 02:45:10.451327       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:45:10.451362       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:45:10.451331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:45:10.451383       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:45:10.451365       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 02:45:10.451783       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:45:10.451810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:45:10.552515       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:45:10.552521       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:45:10.552713       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.252279     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-956139\" not found" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.252402     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-956139\" not found" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.252563     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-956139\" not found" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.420525     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.487892     671 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.487989     671 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.488023     671 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.488906     671 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.539000     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-956139\" already exists" pod="kube-system/kube-scheduler-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.539036     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.544899     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-956139\" already exists" pod="kube-system/etcd-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.544936     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.550119     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-956139\" already exists" pod="kube-system/kube-apiserver-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: I1119 02:45:10.550273     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-956139"
	Nov 19 02:45:10 newest-cni-956139 kubelet[671]: E1119 02:45:10.557685     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-956139\" already exists" pod="kube-system/kube-controller-manager-newest-cni-956139"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.214248     671 apiserver.go:52] "Watching apiserver"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.316645     671 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412557     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f447bc0-73e5-4008-b474-551b69553ce3-lib-modules\") pod \"kube-proxy-7frpm\" (UID: \"7f447bc0-73e5-4008-b474-551b69553ce3\") " pod="kube-system/kube-proxy-7frpm"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412727     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f447bc0-73e5-4008-b474-551b69553ce3-xtables-lock\") pod \"kube-proxy-7frpm\" (UID: \"7f447bc0-73e5-4008-b474-551b69553ce3\") " pod="kube-system/kube-proxy-7frpm"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412828     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-cni-cfg\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412875     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-lib-modules\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:45:11 newest-cni-956139 kubelet[671]: I1119 02:45:11.412917     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20583cba-5129-470f-b6f9-869642b28f93-xtables-lock\") pod \"kindnet-s65nc\" (UID: \"20583cba-5129-470f-b6f9-869642b28f93\") " pod="kube-system/kindnet-s65nc"
	Nov 19 02:45:12 newest-cni-956139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:45:13 newest-cni-956139 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:45:13 newest-cni-956139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956139 -n newest-cni-956139
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956139 -n newest-cni-956139: exit status 2 (308.948416ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-956139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-l7vmx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5nfsz kubernetes-dashboard-855c9754f9-7frml
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5nfsz kubernetes-dashboard-855c9754f9-7frml
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5nfsz kubernetes-dashboard-855c9754f9-7frml: exit status 1 (55.784846ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-l7vmx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-5nfsz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7frml" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-956139 describe pod coredns-66bc5c9577-l7vmx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5nfsz kubernetes-dashboard-855c9754f9-7frml: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.62s)
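
For anyone replaying this post-mortem by hand, the failing pause check reduces to the two kubectl queries the harness ran above; a minimal sketch, assuming the newest-cni-956139 context from this run is still present in your kubeconfig:

	# List pods in any namespace that are not in phase Running,
	# mirroring the helpers_test.go field-selector query above.
	kubectl --context newest-cni-956139 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running

	# Describe one of the offenders; a NotFound error (exit status 1,
	# as seen here) means the pod was already deleted between queries.
	kubectl --context newest-cni-956139 describe pod <pod-name>

Here <pod-name> is a placeholder; the run above probed the four names returned by the field-selector query.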


Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.05
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 4.34
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.79
22 TestOffline 78.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 122.99
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.41
48 TestAddons/StoppedEnableDisable 16.61
49 TestCertOptions 27.13
50 TestCertExpiration 212.2
52 TestForceSystemdFlag 34.13
53 TestForceSystemdEnv 35.53
58 TestErrorSpam/setup 19.34
59 TestErrorSpam/start 0.63
60 TestErrorSpam/status 0.89
61 TestErrorSpam/pause 6.62
62 TestErrorSpam/unpause 4.89
63 TestErrorSpam/stop 2.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.25
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.04
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.66
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 53.19
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.12
86 TestFunctional/serial/LogsFileCmd 1.13
87 TestFunctional/serial/InvalidService 3.75
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 9.66
91 TestFunctional/parallel/DryRun 0.36
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.99
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 23.25
101 TestFunctional/parallel/SSHCmd 0.62
102 TestFunctional/parallel/CpCmd 2.05
103 TestFunctional/parallel/MySQL 20.8
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.84
109 TestFunctional/parallel/NodeLabels 0.05
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.45
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.46
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.21
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
132 TestFunctional/parallel/ImageCommands/ImageBuild 2.06
133 TestFunctional/parallel/ImageCommands/Setup 0.99
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
142 TestFunctional/parallel/ProfileCmd/profile_list 0.37
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
144 TestFunctional/parallel/MountCmd/any-port 6.75
145 TestFunctional/parallel/MountCmd/specific-port 2.02
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
150 TestFunctional/parallel/ServiceCmd/List 1.7
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 142.59
163 TestMultiControlPlane/serial/DeployApp 4.29
164 TestMultiControlPlane/serial/PingHostFromPods 0.98
165 TestMultiControlPlane/serial/AddWorkerNode 53.97
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
168 TestMultiControlPlane/serial/CopyFile 16.39
169 TestMultiControlPlane/serial/StopSecondaryNode 19.3
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.61
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 115.98
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.51
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
176 TestMultiControlPlane/serial/StopCluster 31.55
177 TestMultiControlPlane/serial/RestartCluster 55.98
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
179 TestMultiControlPlane/serial/AddSecondaryNode 37.62
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
185 TestJSONOutput/start/Command 37.18
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.01
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 26.98
211 TestKicCustomNetwork/use_default_bridge_network 23.59
212 TestKicExistingNetwork 23.46
213 TestKicCustomSubnet 27.5
214 TestKicStaticIP 22.42
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 45.76
219 TestMountStart/serial/StartWithMountFirst 7.88
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 4.82
222 TestMountStart/serial/VerifyMountSecond 0.25
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.43
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 65.64
231 TestMultiNode/serial/DeployApp2Nodes 3.21
232 TestMultiNode/serial/PingHostFrom2Pods 0.68
233 TestMultiNode/serial/AddNode 26.13
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.62
236 TestMultiNode/serial/CopyFile 9.24
237 TestMultiNode/serial/StopNode 2.18
238 TestMultiNode/serial/StartAfterStop 7.03
239 TestMultiNode/serial/RestartKeepsNodes 84.09
240 TestMultiNode/serial/DeleteNode 5.13
241 TestMultiNode/serial/StopMultiNode 28.52
242 TestMultiNode/serial/RestartMultiNode 28.27
243 TestMultiNode/serial/ValidateNameConflict 23.67
248 TestPreload 109.95
250 TestScheduledStopUnix 94.22
253 TestInsufficientStorage 12.29
254 TestRunningBinaryUpgrade 48.3
256 TestKubernetesUpgrade 300.32
257 TestMissingContainerUpgrade 110.44
259 TestPause/serial/Start 54.26
260 TestPause/serial/SecondStartNoReconfiguration 6.48
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
264 TestNoKubernetes/serial/StartWithK8s 22.51
272 TestNetworkPlugins/group/false 3.71
276 TestStoppedBinaryUpgrade/Setup 0.62
277 TestStoppedBinaryUpgrade/Upgrade 104
278 TestNoKubernetes/serial/StartWithStopK8s 29.47
279 TestNoKubernetes/serial/Start 6.86
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
282 TestNoKubernetes/serial/ProfileList 1.21
283 TestNoKubernetes/serial/Stop 1.71
284 TestNoKubernetes/serial/StartNoArgs 10.21
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
294 TestNetworkPlugins/group/auto/Start 36.89
295 TestNetworkPlugins/group/auto/KubeletFlags 0.27
296 TestNetworkPlugins/group/auto/NetCatPod 9.2
297 TestNetworkPlugins/group/kindnet/Start 74.69
298 TestNetworkPlugins/group/auto/DNS 0.13
299 TestNetworkPlugins/group/auto/Localhost 0.12
300 TestNetworkPlugins/group/auto/HairPin 0.12
301 TestNetworkPlugins/group/calico/Start 50.32
302 TestNetworkPlugins/group/custom-flannel/Start 45.71
303 TestNetworkPlugins/group/calico/ControllerPod 5.06
304 TestNetworkPlugins/group/calico/KubeletFlags 0.28
305 TestNetworkPlugins/group/calico/NetCatPod 8.18
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.18
308 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/DNS 0.11
310 TestNetworkPlugins/group/calico/Localhost 0.08
311 TestNetworkPlugins/group/calico/HairPin 0.09
312 TestNetworkPlugins/group/custom-flannel/DNS 0.1
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.08
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.08
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
316 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
317 TestNetworkPlugins/group/kindnet/DNS 0.12
318 TestNetworkPlugins/group/kindnet/Localhost 0.11
319 TestNetworkPlugins/group/kindnet/HairPin 0.1
320 TestNetworkPlugins/group/enable-default-cni/Start 73.36
321 TestNetworkPlugins/group/flannel/Start 46.67
322 TestNetworkPlugins/group/bridge/Start 67.45
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
325 TestNetworkPlugins/group/flannel/NetCatPod 9.16
326 TestNetworkPlugins/group/flannel/DNS 0.1
327 TestNetworkPlugins/group/flannel/Localhost 0.08
328 TestNetworkPlugins/group/flannel/HairPin 0.08
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.17
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
335 TestNetworkPlugins/group/bridge/NetCatPod 9.18
337 TestStartStop/group/old-k8s-version/serial/FirstStart 55.6
338 TestNetworkPlugins/group/bridge/DNS 0.11
339 TestNetworkPlugins/group/bridge/Localhost 0.19
340 TestNetworkPlugins/group/bridge/HairPin 0.11
342 TestStartStop/group/no-preload/serial/FirstStart 59.65
344 TestStartStop/group/embed-certs/serial/FirstStart 46.57
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.63
347 TestStartStop/group/old-k8s-version/serial/DeployApp 7.25
349 TestStartStop/group/embed-certs/serial/DeployApp 8.23
350 TestStartStop/group/old-k8s-version/serial/Stop 16.09
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.24
353 TestStartStop/group/no-preload/serial/DeployApp 7.21
354 TestStartStop/group/embed-certs/serial/Stop 18.09
356 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.03
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
359 TestStartStop/group/old-k8s-version/serial/SecondStart 26.46
360 TestStartStop/group/no-preload/serial/Stop 16.27
361 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
362 TestStartStop/group/embed-certs/serial/SecondStart 47.21
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.34
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
366 TestStartStop/group/no-preload/serial/SecondStart 52.91
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11.01
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.07
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
372 TestStartStop/group/newest-cni/serial/FirstStart 26.37
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
374 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
379 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
380 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
383 TestStartStop/group/newest-cni/serial/DeployApp 0
385 TestStartStop/group/newest-cni/serial/Stop 2.52
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
389 TestStartStop/group/newest-cni/serial/SecondStart 11.37
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
TestDownloadOnly/v1.28.0/json-events (5.05s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-684189 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-684189 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.054023788s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.05s)
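
Outside the test harness, the same download-only start can be reproduced directly; a minimal sketch, assuming a locally built out/minikube-linux-amd64 binary, with the flags taken verbatim from the run above:

	# Fetch the v1.28.0 preload tarball, kubectl binary, and kic base
	# image without creating a cluster, emitting JSON progress events.
	out/minikube-linux-amd64 start -o=json --download-only \
	  -p download-only-684189 --force --alsologtostderr \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker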

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 01:56:18.139116   14634 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1119 01:56:18.139200   14634 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
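
The preload-exists assertion is effectively a file-existence check against the cache populated by the json-events subtest; a rough shell equivalent, assuming MINIKUBE_HOME points at the .minikube directory used by this run (path taken from the log line above):

	# Confirm the cached preload tarball that preload.go reports as found.
	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"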

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-684189
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-684189: exit status 85 (67.494218ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-684189 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-684189 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:13.133491   14646 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:13.133743   14646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:13.133753   14646 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:13.133760   14646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:13.133950   14646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	W1119 01:56:13.134075   14646 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21924-11126/.minikube/config/config.json: open /home/jenkins/minikube-integration/21924-11126/.minikube/config/config.json: no such file or directory
	I1119 01:56:13.134549   14646 out.go:368] Setting JSON to true
	I1119 01:56:13.135408   14646 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2320,"bootTime":1763515053,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:13.135500   14646 start.go:143] virtualization: kvm guest
	I1119 01:56:13.137533   14646 out.go:99] [download-only-684189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1119 01:56:13.137631   14646 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 01:56:13.137649   14646 notify.go:221] Checking for updates...
	I1119 01:56:13.138887   14646 out.go:171] MINIKUBE_LOCATION=21924
	I1119 01:56:13.140117   14646 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:13.141332   14646 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 01:56:13.142415   14646 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 01:56:13.143428   14646 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 01:56:13.145479   14646 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 01:56:13.145695   14646 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:13.167374   14646 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 01:56:13.167448   14646 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:13.551066   14646 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-19 01:56:13.542475896 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:13.551158   14646 docker.go:319] overlay module found
	I1119 01:56:13.552598   14646 out.go:99] Using the docker driver based on user configuration
	I1119 01:56:13.552631   14646 start.go:309] selected driver: docker
	I1119 01:56:13.552647   14646 start.go:930] validating driver "docker" against <nil>
	I1119 01:56:13.552736   14646 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:13.611901   14646 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-19 01:56:13.601971706 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:13.612088   14646 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:13.612593   14646 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1119 01:56:13.612752   14646 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 01:56:13.614301   14646 out.go:171] Using Docker driver with root privileges
	I1119 01:56:13.615366   14646 cni.go:84] Creating CNI manager for ""
	I1119 01:56:13.615417   14646 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:56:13.615426   14646 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:56:13.615489   14646 start.go:353] cluster config:
	{Name:download-only-684189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-684189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:13.616674   14646 out.go:99] Starting "download-only-684189" primary control-plane node in "download-only-684189" cluster
	I1119 01:56:13.616693   14646 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 01:56:13.617676   14646 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:56:13.617698   14646 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 01:56:13.617788   14646 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:56:13.633118   14646 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:13.633268   14646 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:56:13.633346   14646 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:13.641663   14646 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1119 01:56:13.641686   14646 cache.go:65] Caching tarball of preloaded images
	I1119 01:56:13.641793   14646 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 01:56:13.643252   14646 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 01:56:13.643271   14646 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1119 01:56:13.683285   14646 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1119 01:56:13.683367   14646 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1119 01:56:17.099968   14646 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1119 01:56:17.100368   14646 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/download-only-684189/config.json ...
	I1119 01:56:17.100403   14646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/download-only-684189/config.json: {Name:mkc82c0186d4ef031f442711ce049e7ffe260d70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:17.100590   14646 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 01:56:17.100756   14646 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v1.28.0/kubectl
	I1119 01:56:17.293647   14646 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1119 01:56:18.131423   14646 cache.go:243] Successfully downloaded all kic artifacts
	
	
	* The control-plane node download-only-684189 host does not exist
	  To start a cluster, run: "minikube start -p download-only-684189"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
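
Note on the preload step logged above: minikube asks the GCS API for the tarball's MD5, then downloads with a "?checksum=md5:..." query and verifies the bytes locally. A minimal Go sketch of that download-and-verify pattern (an illustration under those assumptions, not minikube's actual downloader; the helper name is hypothetical):

	// Sketch: fetch a preload tarball and verify it against the MD5
	// reported by the GCS API, mirroring the "?checksum=md5:..." step above.
	package sketch
	
	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)
	
	// downloadWithMD5 is a hypothetical helper, not a minikube API.
	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
	
		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()
	
		h := md5.New()
		// Stream the body into the file and the hash in one pass.
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}
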
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-684189
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (4.34s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-306316 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-306316 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.33595185s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.34s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 01:56:22.882672   14634 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1119 01:56:22.882716   14634 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-306316
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-306316: exit status 85 (67.636422ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-684189 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-684189 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ delete  │ -p download-only-684189                                                                                                                                                   │ download-only-684189 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ start   │ -o=json --download-only -p download-only-306316 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-306316 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:18.595536   15009 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:18.595636   15009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:18.595647   15009 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:18.595653   15009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:18.595852   15009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 01:56:18.596288   15009 out.go:368] Setting JSON to true
	I1119 01:56:18.597084   15009 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2326,"bootTime":1763515053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:18.597162   15009 start.go:143] virtualization: kvm guest
	I1119 01:56:18.598732   15009 out.go:99] [download-only-306316] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 01:56:18.598876   15009 notify.go:221] Checking for updates...
	I1119 01:56:18.599917   15009 out.go:171] MINIKUBE_LOCATION=21924
	I1119 01:56:18.601072   15009 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:18.602266   15009 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 01:56:18.603241   15009 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 01:56:18.604247   15009 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 01:56:18.606302   15009 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 01:56:18.606589   15009 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:18.628616   15009 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 01:56:18.628683   15009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:18.681825   15009 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-19 01:56:18.67330723 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:18.681925   15009 docker.go:319] overlay module found
	I1119 01:56:18.683254   15009 out.go:99] Using the docker driver based on user configuration
	I1119 01:56:18.683287   15009 start.go:309] selected driver: docker
	I1119 01:56:18.683295   15009 start.go:930] validating driver "docker" against <nil>
	I1119 01:56:18.683369   15009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:18.734461   15009 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-19 01:56:18.726269551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:18.734603   15009 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:18.735045   15009 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1119 01:56:18.735182   15009 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 01:56:18.736852   15009 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-306316 host does not exist
	  To start a cluster, run: "minikube start -p download-only-306316"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-306316
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-148027 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-148027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-148027
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.79s)

=== RUN   TestBinaryMirror
I1119 01:56:23.942014   14634 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-075616 --alsologtostderr --binary-mirror http://127.0.0.1:42109 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-075616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-075616
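
The --binary-mirror flow above only needs an HTTP endpoint that exposes the dl.k8s.io path layout (release/v1.34.1/bin/linux/amd64/kubectl plus its .sha256 file); the test points the flag at http://127.0.0.1:42109. A minimal sketch of such a mirror, assuming a local ./mirror directory laid out the same way (illustrative, not the test's actual server):

	// Sketch: a local binary mirror for "minikube start --binary-mirror".
	// Serves ./mirror with the same paths dl.k8s.io uses, e.g.
	// ./mirror/release/v1.34.1/bin/linux/amd64/kubectl and kubectl.sha256.
	package main
	
	import (
		"log"
		"net/http"
	)
	
	func main() {
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:42109", nil))
	}
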
--- PASS: TestBinaryMirror (0.79s)

TestOffline (78.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-852644 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-852644 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m15.815169993s)
helpers_test.go:175: Cleaning up "offline-crio-852644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-852644
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-852644: (2.452187825s)
--- PASS: TestOffline (78.27s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-167289
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-167289: exit status 85 (59.09267ms)

-- stdout --
	* Profile "addons-167289" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-167289"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-167289
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-167289: exit status 85 (59.267619ms)

-- stdout --
	* Profile "addons-167289" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-167289"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (122.99s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-167289 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-167289 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m2.988442956s)
--- PASS: TestAddons/Setup (122.99s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-167289 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-167289 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-167289 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-167289 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d4b4f227-0052-445f-a84d-de63013a9d7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d4b4f227-0052-445f-a84d-de63013a9d7f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.002798332s
addons_test.go:694: (dbg) Run:  kubectl --context addons-167289 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-167289 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-167289 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

TestAddons/StoppedEnableDisable (16.61s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-167289
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-167289: (16.339969347s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-167289
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-167289
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-167289
--- PASS: TestAddons/StoppedEnableDisable (16.61s)

TestCertOptions (27.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-336989 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-336989 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.140140047s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-336989 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-336989 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-336989 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-336989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-336989
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-336989: (2.347425157s)
--- PASS: TestCertOptions (27.13s)

TestCertExpiration (212.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-455061 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-455061 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.711348327s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-455061 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-455061 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (8.077895934s)
helpers_test.go:175: Cleaning up "cert-expiration-455061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-455061
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-455061: (2.412334554s)
--- PASS: TestCertExpiration (212.20s)

TestForceSystemdFlag (34.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-103780 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1119 02:35:24.851367   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-103780 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.204315513s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-103780 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-103780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-103780
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-103780: (2.601079562s)
--- PASS: TestForceSystemdFlag (34.13s)

TestForceSystemdEnv (35.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-924069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-924069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.092253231s)
helpers_test.go:175: Cleaning up "force-systemd-env-924069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-924069
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-924069: (2.434516936s)
--- PASS: TestForceSystemdEnv (35.53s)

TestErrorSpam/setup (19.34s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-042315 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-042315 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-042315 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-042315 --driver=docker  --container-runtime=crio: (19.343067673s)
--- PASS: TestErrorSpam/setup (19.34s)

TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 status
--- PASS: TestErrorSpam/status (0.89s)

TestErrorSpam/pause (6.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause: exit status 80 (1.934040721s)

-- stdout --
	* Pausing node nospam-042315 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause: exit status 80 (2.289766851s)

-- stdout --
	* Pausing node nospam-042315 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause: exit status 80 (2.392603867s)

-- stdout --
	* Pausing node nospam-042315 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:02:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 pause" failed: exit status 80
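
All three pause attempts above fail identically: before pausing, minikube lists the node's running containers with `sudo runc list -f json`, and on this CRI-O image /run/runc does not exist, so the probe exits 1 (GUEST_PAUSE). A rough Go sketch of that probe run as an external command (a hypothetical helper for illustration; minikube issues the equivalent over its own SSH runner):

	// Sketch: reproduce the failing "list running" probe from outside the
	// node. Hypothetical helper, not minikube's internal implementation.
	package sketch
	
	import "os/exec"
	
	// listRunc mirrors `sudo runc list -f json` executed via `minikube ssh`;
	// on this image it exits 1: "open /run/runc: no such file or directory".
	func listRunc(profile string) ([]byte, error) {
		cmd := exec.Command("minikube", "-p", profile,
			"ssh", "--", "sudo", "runc", "list", "-f", "json")
		return cmd.Output()
	}
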
--- PASS: TestErrorSpam/pause (6.62s)

TestErrorSpam/unpause (4.89s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause: exit status 80 (1.350820286s)

-- stdout --
	* Unpausing node nospam-042315 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:02:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause: exit status 80 (1.635074978s)

-- stdout --
	* Unpausing node nospam-042315 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:02:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause: exit status 80 (1.906513524s)

-- stdout --
	* Unpausing node nospam-042315 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:02:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.89s)

TestErrorSpam/stop (2.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 stop: (2.31369577s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-042315 --log_dir /tmp/nospam-042315 stop
--- PASS: TestErrorSpam/stop (2.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21924-11126/.minikube/files/etc/test/nested/copy/14634/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345998 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-345998 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.248578726s)
--- PASS: TestFunctional/serial/StartWithProxy (37.25s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.04s)

=== RUN   TestFunctional/serial/SoftStart
I1119 02:02:50.425817   14634 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345998 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-345998 --alsologtostderr -v=8: (6.040010983s)
functional_test.go:678: soft start took 6.040851232s for "functional-345998" cluster.
I1119 02:02:56.466323   14634 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.04s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-345998 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-345998 /tmp/TestFunctionalserialCacheCmdcacheadd_local3359857738/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cache add minikube-local-cache-test:functional-345998
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cache delete minikube-local-cache-test:functional-345998
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-345998
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (270.184773ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 kubectl -- --context functional-345998 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-345998 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (53.19s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345998 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1119 02:03:28.336127   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:28.342491   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:28.353831   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:28.375170   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:28.416498   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:28.497882   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:28.659404   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:28.981085   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:29.623074   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:30.904625   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:33.467477   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:38.589789   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:48.832086   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-345998 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.18926502s)
functional_test.go:776: restart took 53.189410444s for "functional-345998" cluster.
I1119 02:03:55.727181   14634 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (53.19s)
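The restart above exercises --extra-config, which takes component.key=value pairs handed straight to the named Kubernetes component, while --wait=all blocks until every verified component reports healthy. Condensed:

minikube start -p functional-345998 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all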

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-345998 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.12s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-345998 logs: (1.11486082s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

TestFunctional/serial/LogsFileCmd (1.13s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 logs --file /tmp/TestFunctionalserialLogsFileCmd2293224481/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-345998 logs --file /tmp/TestFunctionalserialLogsFileCmd2293224481/001/logs.txt: (1.124368768s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.13s)

TestFunctional/serial/InvalidService (3.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-345998 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-345998
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-345998: exit status 115 (328.792554ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31922 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-345998 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.75s)
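testdata/invalidsvc.yaml itself is not shown in this log; the failure only requires a Service that never acquires a running backend pod, so `minikube service` prints the URL table yet exits 115 with SVC_UNREACHABLE. A hypothetical stand-in that should reproduce the same result:

# hypothetical equivalent of testdata/invalidsvc.yaml: a NodePort service
# whose selector matches no pod
kubectl --context functional-345998 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist
  ports:
  - port: 80
EOF
minikube service invalid-svc -p functional-345998     # exit status 115
kubectl --context functional-345998 delete svc invalid-svc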

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 config get cpus: exit status 14 (97.509493ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 config get cpus: exit status 14 (58.628674ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
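The exit-code contract checked here: `config get` on an unset key exits 14 with "specified key could not be found in config", while get on a set key exits 0. In sequence:

minikube -p functional-345998 config get cpus     # exit 14: key not set
minikube -p functional-345998 config set cpus 2
minikube -p functional-345998 config get cpus     # exit 0, prints the value
minikube -p functional-345998 config unset cpus
minikube -p functional-345998 config get cpus     # exit 14 again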

TestFunctional/parallel/DashboardCmd (9.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345998 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345998 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 51575: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.66s)

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345998 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-345998 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (157.091167ms)
-- stdout --
	* [functional-345998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1119 02:04:27.317295   51713 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:04:27.317399   51713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:27.317407   51713 out.go:374] Setting ErrFile to fd 2...
	I1119 02:04:27.317411   51713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:27.317620   51713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:04:27.318024   51713 out.go:368] Setting JSON to false
	I1119 02:04:27.318968   51713 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2814,"bootTime":1763515053,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:04:27.319048   51713 start.go:143] virtualization: kvm guest
	I1119 02:04:27.320818   51713 out.go:179] * [functional-345998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:04:27.321917   51713 notify.go:221] Checking for updates...
	I1119 02:04:27.321946   51713 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:04:27.323255   51713 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:04:27.324544   51713 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:04:27.325670   51713 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:04:27.326957   51713 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:04:27.328327   51713 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:04:27.330639   51713 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:04:27.331376   51713 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:04:27.353591   51713 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:04:27.353730   51713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:04:27.407532   51713 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 02:04:27.398971068 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:04:27.407644   51713 docker.go:319] overlay module found
	I1119 02:04:27.409897   51713 out.go:179] * Using the docker driver based on existing profile
	I1119 02:04:27.410967   51713 start.go:309] selected driver: docker
	I1119 02:04:27.410983   51713 start.go:930] validating driver "docker" against &{Name:functional-345998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-345998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:04:27.411072   51713 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:04:27.412852   51713 out.go:203] 
	W1119 02:04:27.413927   51713 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1119 02:04:27.414918   51713 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345998 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
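--dry-run validates the requested settings against the existing profile without touching it: the 250MB request fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) because it is under the 1800MB usable minimum, while the second invocation, which omits --memory, validates cleanly. Condensed:

minikube start -p functional-345998 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
minikube start -p functional-345998 --dry-run --driver=docker --container-runtime=crio                  # exit 0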

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345998 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-345998 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (156.469268ms)
-- stdout --
	* [functional-345998] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1119 02:04:27.937918   52074 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:04:27.938015   52074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:27.938024   52074 out.go:374] Setting ErrFile to fd 2...
	I1119 02:04:27.938028   52074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:27.938289   52074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:04:27.938694   52074 out.go:368] Setting JSON to false
	I1119 02:04:27.939581   52074 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2815,"bootTime":1763515053,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:04:27.939666   52074 start.go:143] virtualization: kvm guest
	I1119 02:04:27.941159   52074 out.go:179] * [functional-345998] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1119 02:04:27.942531   52074 notify.go:221] Checking for updates...
	I1119 02:04:27.942563   52074 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:04:27.943677   52074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:04:27.944793   52074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:04:27.945883   52074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:04:27.946901   52074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:04:27.947935   52074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:04:27.949241   52074 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:04:27.949674   52074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:04:27.971517   52074 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:04:27.971624   52074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:04:28.027963   52074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 02:04:28.01755285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:04:28.028099   52074 docker.go:319] overlay module found
	I1119 02:04:28.030501   52074 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1119 02:04:28.031577   52074 start.go:309] selected driver: docker
	I1119 02:04:28.031598   52074 start.go:930] validating driver "docker" against &{Name:functional-345998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-345998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:04:28.031681   52074 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:04:28.033341   52074 out.go:203] 
	W1119 02:04:28.034409   52074 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1119 02:04:28.035651   52074 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (23.25s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [78d8c26f-0808-4d88-b443-a2de35190ec0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003590005s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-345998 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-345998 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-345998 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-345998 apply -f testdata/storage-provisioner/pod.yaml
I1119 02:04:08.109703   14634 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7547d36b-3559-4829-8a00-efb383ad6166] Pending
helpers_test.go:352: "sp-pod" [7547d36b-3559-4829-8a00-efb383ad6166] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1119 02:04:09.314185   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [7547d36b-3559-4829-8a00-efb383ad6166] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003537626s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-345998 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-345998 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-345998 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [006101b3-d68d-4527-b48c-885fd5fd70de] Pending
helpers_test.go:352: "sp-pod" [006101b3-d68d-4527-b48c-885fd5fd70de] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [006101b3-d68d-4527-b48c-885fd5fd70de] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003059039s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-345998 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.25s)
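The second sp-pod is the actual assertion: data written through the claim must outlive the pod that wrote it. The sequence above, condensed:

kubectl --context functional-345998 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-345998 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-345998 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-345998 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-345998 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-345998 exec sp-pod -- ls /tmp/mount   # foo survives the pod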

TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

TestFunctional/parallel/CpCmd (2.05s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh -n functional-345998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cp functional-345998:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2373048782/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh -n functional-345998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh -n functional-345998 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)
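`minikube cp` copies in both directions, addressing the node side as <node>:<path>, and creates missing destination directories inside the node. Condensed:

minikube -p functional-345998 cp testdata/cp-test.txt /home/docker/cp-test.txt                # host -> node
minikube -p functional-345998 cp functional-345998:/home/docker/cp-test.txt /tmp/cp-test.txt  # node -> host
minikube -p functional-345998 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt         # parents created in the node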

TestFunctional/parallel/MySQL (20.8s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-345998 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-pktdw" [45894abb-546d-4bc5-92a6-f5c9e882674a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/11/19 02:04:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-pktdw" [45894abb-546d-4bc5-92a6-f5c9e882674a] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.002980746s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345998 exec mysql-5bb876957f-pktdw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-345998 exec mysql-5bb876957f-pktdw -- mysql -ppassword -e "show databases;": exit status 1 (82.409161ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1119 02:04:47.302442   14634 retry.go:31] will retry after 1.454421298s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345998 exec mysql-5bb876957f-pktdw -- mysql -ppassword -e "show databases;"
E1119 02:04:50.276313   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:06:12.198045   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:28.327052   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:56.039943   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:13:28.327658   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (20.80s)
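The first exec fails with ERROR 2002 because the pod reaches Running before mysqld is accepting socket connections; the harness just waits and retries. A rough shell equivalent of that retry, using the pod name from this run:

until kubectl --context functional-345998 exec mysql-5bb876957f-pktdw -- \
    mysql -ppassword -e "show databases;"; do
  sleep 2   # mysqld not accepting connections yet
done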

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14634/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo cat /etc/test/nested/copy/14634/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.84s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14634.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo cat /etc/ssl/certs/14634.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14634.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo cat /usr/share/ca-certificates/14634.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/146342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo cat /etc/ssl/certs/146342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/146342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo cat /usr/share/ca-certificates/146342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)
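Host certificates are synced into the node twice (/etc/ssl/certs/<name>.pem and /usr/share/ca-certificates/<name>.pem) plus an OpenSSL subject-hash link such as 51391683.0. Assuming those hash files are the c_rehash-style links for the synced certs, the link name can be derived on the host:

openssl x509 -noout -hash -in 14634.pem                                  # prints the subject hash, e.g. 51391683
minikube -p functional-345998 ssh "sudo cat /etc/ssl/certs/51391683.0"   # the linked cert inside the node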

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-345998 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 ssh "sudo systemctl is-active docker": exit status 1 (290.169001ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 ssh "sudo systemctl is-active containerd": exit status 1 (267.774967ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
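With crio as the active runtime, docker and containerd must both be stopped. `systemctl is-active` exits 3 for an inactive unit, which is why each ssh invocation prints "inactive" yet still returns non-zero:

minikube -p functional-345998 ssh "sudo systemctl is-active docker"       # prints "inactive", remote exit 3
minikube -p functional-345998 ssh "sudo systemctl is-active containerd"   # prints "inactive", remote exit 3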

TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-345998 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-345998 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-345998 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 46332: os: process already finished
helpers_test.go:519: unable to terminate pid 45992: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-345998 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-345998 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-345998 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f47b5ede-c20c-4732-8b16-fbc510330b8e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f47b5ede-c20c-4732-8b16-fbc510330b8e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.005213557s
I1119 02:04:10.439927   14634 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-345998 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.78.212 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
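Putting the tunnel subtests together: keep `minikube tunnel` running, apply a LoadBalancer service, read the ingress IP it receives, and hit that IP directly from the host. Condensed:

minikube -p functional-345998 tunnel &                # keep running in the background
kubectl --context functional-345998 apply -f testdata/testsvc.yaml
kubectl --context functional-345998 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl http://10.102.78.212/                            # the IP printed above, for this run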

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-345998 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345998 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345998 image ls --format short --alsologtostderr:
I1119 02:04:40.602702   54104 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:40.602948   54104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:40.602958   54104 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:40.602961   54104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:40.603183   54104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
I1119 02:04:40.603790   54104 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:40.603884   54104 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:40.604334   54104 cli_runner.go:164] Run: docker container inspect functional-345998 --format={{.State.Status}}
I1119 02:04:40.621995   54104 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:40.622041   54104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345998
I1119 02:04:40.637739   54104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/functional-345998/id_rsa Username:docker}
I1119 02:04:40.730912   54104 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345998 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-345998  │ 6e57577f0426b │ 1.47MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345998 image ls --format table --alsologtostderr:
I1119 02:04:43.418553   54883 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:43.418661   54883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:43.418672   54883 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:43.418678   54883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:43.418893   54883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
I1119 02:04:43.419398   54883 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:43.419491   54883 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:43.419837   54883 cli_runner.go:164] Run: docker container inspect functional-345998 --format={{.State.Status}}
I1119 02:04:43.436629   54883 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:43.436668   54883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345998
I1119 02:04:43.453064   54883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/functional-345998/id_rsa Username:docker}
I1119 02:04:43.544481   54883 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345998 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"973c0bf121238986812f29d1728f05c6bd0ea77c3d012e103ea622fc3fb54ef5","repoDigests":["docker.io/library/02102dea50d4118bd10d179ed112ebf5088575f78f7f07cf3d93da78f8b06ae4-tmp@sha256:b9491711311331de5e72573646924ac8064ca23ee03ce7f0289b3c591e4577e5"],"repoTags":[],"size":"1466132"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"beae173ccac6ad749f76713cf44
40fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6e57577f0426baea6f07044d3343adfda59f1a1c04439ae6eb458cdf8d2443c3","repoDigests":["localhost/my-image@sha256:3486fbd3ca34bc25537d2623eaee1b3ab5121326df793340844db179cd483eb3"],"repoTags":["localhost/my-image:functional-345998"],"size":"1468744"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e
974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639
a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8
d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
,"repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998c
d84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345998 image ls --format json --alsologtostderr:
I1119 02:04:43.207981   54828 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:43.208093   54828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:43.208104   54828 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:43.208110   54828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:43.208338   54828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
I1119 02:04:43.208878   54828 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:43.208994   54828 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:43.210293   54828 cli_runner.go:164] Run: docker container inspect functional-345998 --format={{.State.Status}}
I1119 02:04:43.227915   54828 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:43.227965   54828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345998
I1119 02:04:43.244371   54828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/functional-345998/id_rsa Username:docker}
I1119 02:04:43.336377   54828 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
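
Each entry in the JSON above carries id, repoDigests, repoTags, and size (bytes, as a string; repoTags may be empty for untagged images). A small Go sketch that shells out to the same command and decodes it — the struct mirrors only the fields visible in this report:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageInfo matches the fields visible in the JSON output above; size is a
// string of bytes, and repoTags may be empty for untagged images.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-345998",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("minikube:", err)
		return
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	for _, img := range images {
		fmt.Printf("%.13s  %d tag(s)\n", img.ID, len(img.RepoTags))
	}
}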

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345998 image ls --format yaml --alsologtostderr:
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345998 image ls --format yaml --alsologtostderr:
I1119 02:04:40.819608   54191 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:40.819875   54191 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:40.819885   54191 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:40.819889   54191 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:40.820092   54191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
I1119 02:04:40.820687   54191 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:40.820791   54191 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:40.821242   54191 cli_runner.go:164] Run: docker container inspect functional-345998 --format={{.State.Status}}
I1119 02:04:40.839088   54191 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:40.839143   54191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345998
I1119 02:04:40.856000   54191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/functional-345998/id_rsa Username:docker}
I1119 02:04:40.949728   54191 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 ssh pgrep buildkitd: exit status 1 (266.626868ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image build -t localhost/my-image:functional-345998 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-345998 image build -t localhost/my-image:functional-345998 testdata/build --alsologtostderr: (1.579739736s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345998 image build -t localhost/my-image:functional-345998 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 973c0bf1212
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-345998
--> 6e57577f042
Successfully tagged localhost/my-image:functional-345998
6e57577f0426baea6f07044d3343adfda59f1a1c04439ae6eb458cdf8d2443c3
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345998 image build -t localhost/my-image:functional-345998 testdata/build --alsologtostderr:
I1119 02:04:41.410702   54367 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:41.410866   54367 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:41.410879   54367 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:41.410885   54367 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:41.411079   54367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
I1119 02:04:41.411622   54367 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:41.412312   54367 config.go:182] Loaded profile config "functional-345998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:04:41.412735   54367 cli_runner.go:164] Run: docker container inspect functional-345998 --format={{.State.Status}}
I1119 02:04:41.431044   54367 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:41.431093   54367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345998
I1119 02:04:41.447347   54367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/functional-345998/id_rsa Username:docker}
I1119 02:04:41.539544   54367 build_images.go:162] Building image from path: /tmp/build.65245481.tar
I1119 02:04:41.539611   54367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1119 02:04:41.547233   54367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.65245481.tar
I1119 02:04:41.550501   54367 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.65245481.tar: stat -c "%s %y" /var/lib/minikube/build/build.65245481.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.65245481.tar': No such file or directory
I1119 02:04:41.550524   54367 ssh_runner.go:362] scp /tmp/build.65245481.tar --> /var/lib/minikube/build/build.65245481.tar (3072 bytes)
I1119 02:04:41.567022   54367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.65245481
I1119 02:04:41.574000   54367 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.65245481 -xf /var/lib/minikube/build/build.65245481.tar
I1119 02:04:41.581145   54367 crio.go:315] Building image: /var/lib/minikube/build/build.65245481
I1119 02:04:41.581189   54367 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-345998 /var/lib/minikube/build/build.65245481 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1119 02:04:42.916532   54367 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-345998 /var/lib/minikube/build/build.65245481 --cgroup-manager=cgroupfs: (1.335310793s)
I1119 02:04:42.916609   54367 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.65245481
I1119 02:04:42.924692   54367 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.65245481.tar
I1119 02:04:42.932200   54367 build_images.go:218] Built localhost/my-image:functional-345998 from /tmp/build.65245481.tar
I1119 02:04:42.932232   54367 build_images.go:134] succeeded building to: functional-345998
I1119 02:04:42.932238   54367 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.06s)
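
With no buildkitd on the node, the stderr trace shows the crio path: the build context is packed into /tmp/build.*.tar, copied into /var/lib/minikube/build on the node, unpacked, and built with sudo podman build. A sketch of just the packing step, loosely modeled on what build_images.go does (paths are illustrative, not the harness's real temp names):

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// tarBuildContext packs dir into a tar file, loosely mirroring the
// /tmp/build.*.tar archive minikube ships to the node before running
// `sudo podman build` there.
func tarBuildContext(dir, dest string) error {
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()

	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err // skip directories; carry walk errors upward
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	if err := tarBuildContext("testdata/build", "/tmp/build.example.tar"); err != nil {
		fmt.Println(err)
	}
}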

TestFunctional/parallel/ImageCommands/Setup (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-345998
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image rm kicbase/echo-server:functional-345998 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "315.221807ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "56.31035ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "314.037182ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.531596ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
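
The Took "..." lines come from running profile list -o json and asserting it completes quickly. The exact JSON schema isn't shown in this report, so the sketch below decodes generically into a map and only verifies that well-formed JSON came back, plus the elapsed time:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("minikube:", err)
		return
	}
	// The schema isn't pinned down here, so decode generically and just
	// verify the command produced well-formed JSON.
	var parsed map[string]interface{}
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	fmt.Printf("Took %q to run profile list -o json\n", time.Since(start))
}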

TestFunctional/parallel/MountCmd/any-port (6.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdany-port2254626636/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763517856945576640" to /tmp/TestFunctionalparallelMountCmdany-port2254626636/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763517856945576640" to /tmp/TestFunctionalparallelMountCmdany-port2254626636/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763517856945576640" to /tmp/TestFunctionalparallelMountCmdany-port2254626636/001/test-1763517856945576640
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.785799ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1119 02:04:17.225654   14634 retry.go:31] will retry after 625.2591ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T /mount-9p | grep 9p"
I1119 02:04:17.943209   14634 detect.go:223] nested VM detected
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 19 02:04 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 19 02:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 19 02:04 test-1763517856945576640
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh cat /mount-9p/test-1763517856945576640
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-345998 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0ba9e634-eac4-4a11-994d-98fce790ba9e] Pending
helpers_test.go:352: "busybox-mount" [0ba9e634-eac4-4a11-994d-98fce790ba9e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0ba9e634-eac4-4a11-994d-98fce790ba9e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0ba9e634-eac4-4a11-994d-98fce790ba9e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002970097s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-345998 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdany-port2254626636/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.75s)
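
The retry.go:31 "will retry after ..." lines show the expected race: the first findmnt probe runs before the 9p mount daemon has finished mounting, fails, and is retried after a delay. A generic Go retry helper in that shape (attempt count and initial backoff are arbitrary choices, not the harness's values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping a
// growing interval in between -- the same shape as the "will retry after"
// lines above, where the first probe races the mount daemon and loses.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	err := retry(5, 500*time.Millisecond, func() error {
		// Probe the guest for the 9p mount, as the test does over ssh.
		return exec.Command("minikube", "-p", "functional-345998",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	fmt.Println(err)
}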

TestFunctional/parallel/MountCmd/specific-port (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdspecific-port285153206/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (270.842944ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1119 02:04:23.967621   14634 retry.go:31] will retry after 736.739003ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdspecific-port285153206/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 ssh "sudo umount -f /mount-9p": exit status 1 (271.033144ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-345998 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdspecific-port285153206/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3442542205/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3442542205/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3442542205/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T" /mount1: exit status 1 (356.604132ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1119 02:04:26.079249   14634 retry.go:31] will retry after 298.849149ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-345998 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3442542205/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3442542205/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3442542205/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-345998 service list: (1.700929354s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-345998 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-345998 service list -o json: (1.68565225s)
functional_test.go:1504: Took "1.685755536s" to run "out/minikube-linux-amd64 -p functional-345998 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-345998
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-345998
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-345998
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (142.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m21.916400275s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (142.59s)
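
StartCluster is two commands: start an HA (multi-control-plane) cluster, then confirm every node via status. A bare-bones Go sketch that runs the same invocations and streams their output (flags copied from the log; this stands in for the test's own runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Bring up the HA cluster, then ask for status -- the same two
	// commands the StartCluster step runs above.
	for _, args := range [][]string{
		{"-p", "ha-960334", "start", "--ha", "--memory", "3072", "--wait", "true",
			"--driver=docker", "--container-runtime=crio"},
		{"-p", "ha-960334", "status"},
	} {
		cmd := exec.Command("minikube", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("minikube", args[2], "failed:", err)
			return
		}
	}
}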

TestMultiControlPlane/serial/DeployApp (4.29s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 kubectl -- rollout status deployment/busybox: (2.418456149s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-4nd8k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-8nfwl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-bnzx8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-4nd8k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-8nfwl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-bnzx8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-4nd8k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-8nfwl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-bnzx8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.29s)
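
The DeployApp checks fan out over every busybox replica and every DNS name: external (kubernetes.io), short in-cluster (kubernetes.default), and fully qualified. A sketch of that matrix; the pod names are hard-coded from the log, where the real test lists them via jsonpath first:

package main

import (
	"fmt"
	"os/exec"
)

// Verify in-cluster DNS from every busybox replica, as the DeployApp step
// does: each pod must resolve external and in-cluster names.
func main() {
	pods := []string{"busybox-7b57f96db7-4nd8k", "busybox-7b57f96db7-8nfwl", "busybox-7b57f96db7-bnzx8"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-960334",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
				return
			}
		}
	}
	fmt.Println("all pods resolve all names")
}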

TestMultiControlPlane/serial/PingHostFromPods (0.98s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-4nd8k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-4nd8k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-8nfwl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-8nfwl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-bnzx8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 kubectl -- exec busybox-7b57f96db7-bnzx8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
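
The sh -c pipeline above extracts the resolved address of host.minikube.internal from nslookup's output (line 5, third field) and then pings it from inside the pod, proving pods can reach the host gateway. The same two steps driven from Go, for a single pod (the awk/cut field positions assume busybox nslookup's output layout, as the test does):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Inside the pod, pull the resolved address off the nslookup output
	// (line 5, third field) -- the same pipeline PingHostFromPods runs.
	pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-960334", "exec",
		"busybox-7b57f96db7-4nd8k", "--", "sh", "-c", pipeline).Output()
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	hostIP := strings.TrimSpace(string(out))

	// Then ping that address once from the same pod.
	err = exec.Command("kubectl", "--context", "ha-960334", "exec",
		"busybox-7b57f96db7-4nd8k", "--", "sh", "-c", "ping -c 1 "+hostIP).Run()
	fmt.Println("ping", hostIP, "->", err)
}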

TestMultiControlPlane/serial/AddWorkerNode (53.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 node add --alsologtostderr -v 5: (53.147321463s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.97s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-960334 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (16.39s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp testdata/cp-test.txt ha-960334:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1749313542/001/cp-test_ha-960334.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334:/home/docker/cp-test.txt ha-960334-m02:/home/docker/cp-test_ha-960334_ha-960334-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test_ha-960334_ha-960334-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334:/home/docker/cp-test.txt ha-960334-m03:/home/docker/cp-test_ha-960334_ha-960334-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m03 "sudo cat /home/docker/cp-test_ha-960334_ha-960334-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334:/home/docker/cp-test.txt ha-960334-m04:/home/docker/cp-test_ha-960334_ha-960334-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m04 "sudo cat /home/docker/cp-test_ha-960334_ha-960334-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp testdata/cp-test.txt ha-960334-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1749313542/001/cp-test_ha-960334-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m02:/home/docker/cp-test.txt ha-960334:/home/docker/cp-test_ha-960334-m02_ha-960334.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334 "sudo cat /home/docker/cp-test_ha-960334-m02_ha-960334.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m02:/home/docker/cp-test.txt ha-960334-m03:/home/docker/cp-test_ha-960334-m02_ha-960334-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m03 "sudo cat /home/docker/cp-test_ha-960334-m02_ha-960334-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m02:/home/docker/cp-test.txt ha-960334-m04:/home/docker/cp-test_ha-960334-m02_ha-960334-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m04 "sudo cat /home/docker/cp-test_ha-960334-m02_ha-960334-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp testdata/cp-test.txt ha-960334-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1749313542/001/cp-test_ha-960334-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m03:/home/docker/cp-test.txt ha-960334:/home/docker/cp-test_ha-960334-m03_ha-960334.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334 "sudo cat /home/docker/cp-test_ha-960334-m03_ha-960334.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m03:/home/docker/cp-test.txt ha-960334-m02:/home/docker/cp-test_ha-960334-m03_ha-960334-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test_ha-960334-m03_ha-960334-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m03:/home/docker/cp-test.txt ha-960334-m04:/home/docker/cp-test_ha-960334-m03_ha-960334-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m04 "sudo cat /home/docker/cp-test_ha-960334-m03_ha-960334-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp testdata/cp-test.txt ha-960334-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1749313542/001/cp-test_ha-960334-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m04:/home/docker/cp-test.txt ha-960334:/home/docker/cp-test_ha-960334-m04_ha-960334.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334 "sudo cat /home/docker/cp-test_ha-960334-m04_ha-960334.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m04:/home/docker/cp-test.txt ha-960334-m02:/home/docker/cp-test_ha-960334-m04_ha-960334-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test_ha-960334-m04_ha-960334-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 cp ha-960334-m04:/home/docker/cp-test.txt ha-960334-m03:/home/docker/cp-test_ha-960334-m04_ha-960334-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 ssh -n ha-960334-m03 "sudo cat /home/docker/cp-test_ha-960334-m04_ha-960334-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.39s)
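
Note: every copy above follows the same write-then-verify pattern; a condensed sketch (destination file name shortened for readability):

    # host -> node, then read the file back over ssh
    minikube -p ha-960334 cp testdata/cp-test.txt ha-960334-m02:/home/docker/cp-test.txt
    minikube -p ha-960334 ssh -n ha-960334-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> node copies are staged through the host, so any pair of nodes works
    minikube -p ha-960334 cp ha-960334-m02:/home/docker/cp-test.txt \
      ha-960334-m03:/home/docker/cp-test_m02_m03.txt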

TestMultiControlPlane/serial/StopSecondaryNode (19.3s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 node stop m02 --alsologtostderr -v 5: (18.653581808s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5: exit status 7 (649.150338ms)

-- stdout --
	ha-960334
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-960334-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960334-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-960334-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1119 02:18:11.546368   79536 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:18:11.546802   79536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:11.546811   79536 out.go:374] Setting ErrFile to fd 2...
	I1119 02:18:11.546816   79536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:11.546986   79536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:18:11.547136   79536 out.go:368] Setting JSON to false
	I1119 02:18:11.547165   79536 mustload.go:66] Loading cluster: ha-960334
	I1119 02:18:11.547482   79536 notify.go:221] Checking for updates...
	I1119 02:18:11.548140   79536 config.go:182] Loaded profile config "ha-960334": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:18:11.548167   79536 status.go:174] checking status of ha-960334 ...
	I1119 02:18:11.549178   79536 cli_runner.go:164] Run: docker container inspect ha-960334 --format={{.State.Status}}
	I1119 02:18:11.568093   79536 status.go:371] ha-960334 host status = "Running" (err=<nil>)
	I1119 02:18:11.568119   79536 host.go:66] Checking if "ha-960334" exists ...
	I1119 02:18:11.568348   79536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-960334
	I1119 02:18:11.584777   79536 host.go:66] Checking if "ha-960334" exists ...
	I1119 02:18:11.585027   79536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:18:11.585084   79536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-960334
	I1119 02:18:11.601309   79536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/ha-960334/id_rsa Username:docker}
	I1119 02:18:11.692620   79536 ssh_runner.go:195] Run: systemctl --version
	I1119 02:18:11.698649   79536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:18:11.710219   79536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:18:11.766782   79536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:18:11.756644785 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:18:11.767262   79536 kubeconfig.go:125] found "ha-960334" server: "https://192.168.49.254:8443"
	I1119 02:18:11.767288   79536 api_server.go:166] Checking apiserver status ...
	I1119 02:18:11.767320   79536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:18:11.778447   79536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	W1119 02:18:11.786499   79536 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:18:11.786548   79536 ssh_runner.go:195] Run: ls
	I1119 02:18:11.789828   79536 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 02:18:11.793778   79536 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 02:18:11.793796   79536 status.go:463] ha-960334 apiserver status = Running (err=<nil>)
	I1119 02:18:11.793813   79536 status.go:176] ha-960334 status: &{Name:ha-960334 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:18:11.793843   79536 status.go:174] checking status of ha-960334-m02 ...
	I1119 02:18:11.794098   79536 cli_runner.go:164] Run: docker container inspect ha-960334-m02 --format={{.State.Status}}
	I1119 02:18:11.810661   79536 status.go:371] ha-960334-m02 host status = "Stopped" (err=<nil>)
	I1119 02:18:11.810676   79536 status.go:384] host is not running, skipping remaining checks
	I1119 02:18:11.810681   79536 status.go:176] ha-960334-m02 status: &{Name:ha-960334-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:18:11.810696   79536 status.go:174] checking status of ha-960334-m03 ...
	I1119 02:18:11.810975   79536 cli_runner.go:164] Run: docker container inspect ha-960334-m03 --format={{.State.Status}}
	I1119 02:18:11.827214   79536 status.go:371] ha-960334-m03 host status = "Running" (err=<nil>)
	I1119 02:18:11.827232   79536 host.go:66] Checking if "ha-960334-m03" exists ...
	I1119 02:18:11.827483   79536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-960334-m03
	I1119 02:18:11.842032   79536 host.go:66] Checking if "ha-960334-m03" exists ...
	I1119 02:18:11.842282   79536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:18:11.842334   79536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-960334-m03
	I1119 02:18:11.857951   79536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/ha-960334-m03/id_rsa Username:docker}
	I1119 02:18:11.948366   79536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:18:11.960387   79536 kubeconfig.go:125] found "ha-960334" server: "https://192.168.49.254:8443"
	I1119 02:18:11.960408   79536 api_server.go:166] Checking apiserver status ...
	I1119 02:18:11.960458   79536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:18:11.970459   79536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W1119 02:18:11.977768   79536 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:18:11.977812   79536 ssh_runner.go:195] Run: ls
	I1119 02:18:11.981139   79536 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 02:18:11.985009   79536 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 02:18:11.985026   79536 status.go:463] ha-960334-m03 apiserver status = Running (err=<nil>)
	I1119 02:18:11.985034   79536 status.go:176] ha-960334-m03 status: &{Name:ha-960334-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:18:11.985047   79536 status.go:174] checking status of ha-960334-m04 ...
	I1119 02:18:11.985251   79536 cli_runner.go:164] Run: docker container inspect ha-960334-m04 --format={{.State.Status}}
	I1119 02:18:12.003028   79536 status.go:371] ha-960334-m04 host status = "Running" (err=<nil>)
	I1119 02:18:12.003044   79536 host.go:66] Checking if "ha-960334-m04" exists ...
	I1119 02:18:12.003285   79536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-960334-m04
	I1119 02:18:12.019080   79536 host.go:66] Checking if "ha-960334-m04" exists ...
	I1119 02:18:12.019324   79536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:18:12.019363   79536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-960334-m04
	I1119 02:18:12.035469   79536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/ha-960334-m04/id_rsa Username:docker}
	I1119 02:18:12.126204   79536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:18:12.137912   79536 status.go:176] ha-960334-m04 status: &{Name:ha-960334-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.30s)
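
Note: exit status 7 is expected here, not a failure of the status command itself. Per `minikube status --help`, the exit code encodes host, cluster, and Kubernetes state as bits, so one fully stopped node contributes 1|2|4 = 7. A sketch of the check:

    minikube -p ha-960334 node stop m02
    minikube -p ha-960334 status || echo "status exit code: $?"   # 7 while m02 is down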

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.61s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 node start m02 --alsologtostderr -v 5: (7.729592745s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.61s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.98s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 stop --alsologtostderr -v 5
E1119 02:18:28.328301   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 stop --alsologtostderr -v 5: (35.153177431s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 start --wait true --alsologtostderr -v 5
E1119 02:19:01.782120   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:01.788515   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:01.800692   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:01.822372   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:01.864454   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:01.945882   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:02.107405   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:02.428815   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:03.070401   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:04.351745   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:06.914182   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:12.036411   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:22.277779   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:42.759069   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:19:51.401404   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 start --wait true --alsologtostderr -v 5: (1m20.701163222s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.98s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.51s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 node delete m03 --alsologtostderr -v 5
E1119 02:20:23.721593   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 node delete m03 --alsologtostderr -v 5: (9.671039503s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.51s)
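
Note: the readiness assertion after the delete is a plain go-template over node conditions; a standalone sketch:

    minikube -p ha-960334 node delete m03
    # every remaining node should print True
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'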

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (31.55s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 stop --alsologtostderr -v 5: (31.441447928s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5: exit status 7 (108.416162ms)

-- stdout --
	ha-960334
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960334-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960334-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1119 02:21:00.897790   93967 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:21:00.898054   93967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:21:00.898065   93967 out.go:374] Setting ErrFile to fd 2...
	I1119 02:21:00.898071   93967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:21:00.898291   93967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:21:00.898472   93967 out.go:368] Setting JSON to false
	I1119 02:21:00.898505   93967 mustload.go:66] Loading cluster: ha-960334
	I1119 02:21:00.898537   93967 notify.go:221] Checking for updates...
	I1119 02:21:00.898910   93967 config.go:182] Loaded profile config "ha-960334": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:21:00.898926   93967 status.go:174] checking status of ha-960334 ...
	I1119 02:21:00.899353   93967 cli_runner.go:164] Run: docker container inspect ha-960334 --format={{.State.Status}}
	I1119 02:21:00.917939   93967 status.go:371] ha-960334 host status = "Stopped" (err=<nil>)
	I1119 02:21:00.917962   93967 status.go:384] host is not running, skipping remaining checks
	I1119 02:21:00.917970   93967 status.go:176] ha-960334 status: &{Name:ha-960334 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:21:00.918013   93967 status.go:174] checking status of ha-960334-m02 ...
	I1119 02:21:00.918233   93967 cli_runner.go:164] Run: docker container inspect ha-960334-m02 --format={{.State.Status}}
	I1119 02:21:00.934839   93967 status.go:371] ha-960334-m02 host status = "Stopped" (err=<nil>)
	I1119 02:21:00.934858   93967 status.go:384] host is not running, skipping remaining checks
	I1119 02:21:00.934865   93967 status.go:176] ha-960334-m02 status: &{Name:ha-960334-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:21:00.934916   93967 status.go:174] checking status of ha-960334-m04 ...
	I1119 02:21:00.935129   93967 cli_runner.go:164] Run: docker container inspect ha-960334-m04 --format={{.State.Status}}
	I1119 02:21:00.952464   93967 status.go:371] ha-960334-m04 host status = "Stopped" (err=<nil>)
	I1119 02:21:00.952484   93967 status.go:384] host is not running, skipping remaining checks
	I1119 02:21:00.952489   93967 status.go:176] ha-960334-m04 status: &{Name:ha-960334-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (31.55s)

TestMultiControlPlane/serial/RestartCluster (55.98s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1119 02:21:45.645119   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.225664917s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.98s)
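
Note: the restart relies on the profile remembering its topology; a plain start on the stopped profile brings back both remaining control planes and the worker (flags as in this run):

    minikube -p ha-960334 stop
    minikube -p ha-960334 start --wait true --driver=docker --container-runtime=crio
    kubectl get nodes   # ha-960334, ha-960334-m02, ha-960334-m04 all return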

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (37.62s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-960334 node add --control-plane --alsologtostderr -v 5: (36.80044085s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-960334 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.62s)
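
Note: `node add --control-plane` is what grows the HA control plane here; the same command without the flag adds a worker, as in AddWorkerNode above. Sketch:

    minikube -p ha-960334 node add --control-plane
    minikube -p ha-960334 status   # the new node joins as type: Control Plane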

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestJSONOutput/start/Command (37.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-955461 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-955461 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.181233936s)
--- PASS: TestJSONOutput/start/Command (37.18s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-955461 --output=json --user=testUser
E1119 02:23:28.329586   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-955461 --output=json --user=testUser: (6.012174518s)
--- PASS: TestJSONOutput/stop/Command (6.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-812068 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-812068 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.374739ms)

-- stdout --
	{"specversion":"1.0","id":"0088292c-7c98-4259-85d2-a9e13bcc7463","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-812068] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"73d9a54d-d273-4953-a957-d85457b95fba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21924"}}
	{"specversion":"1.0","id":"6bb41b04-e43a-46d6-a579-2cc3a3390fdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8180169a-0077-4478-ab4a-8743f254a30c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig"}}
	{"specversion":"1.0","id":"f4626c49-b48b-4d5c-802b-d69c6a85acbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube"}}
	{"specversion":"1.0","id":"09d38c94-8853-45fd-9cc7-93b9dd1fcaf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a7261b14-4ea4-4e61-9cc4-c85e7a0e27a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05f66dc9-3afa-4d46-944d-0fbc8d446d35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-812068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-812068
--- PASS: TestErrorJSONOutput (0.21s)
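
Note: every line minikube emits under --output=json is a CloudEvents envelope, and errors arrive as type io.k8s.sigs.minikube.error carrying a machine-readable name and exit code. A sketch of pulling those fields out (assumes jq is installed):

    minikube start -p json-output-error-812068 --output=json --driver=fail 2>/dev/null \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64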

TestKicCustomNetwork/create_custom_network (26.98s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-849534 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-849534 --network=: (24.89425884s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-849534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-849534
E1119 02:24:01.783010   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-849534: (2.069334078s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.98s)

TestKicCustomNetwork/use_default_bridge_network (23.59s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-389882 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-389882 --network=bridge: (21.640492577s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-389882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-389882
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-389882: (1.925609624s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.59s)

TestKicExistingNetwork (23.46s)

=== RUN   TestKicExistingNetwork
I1119 02:24:26.505878   14634 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1119 02:24:26.522111   14634 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1119 02:24:26.522171   14634 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1119 02:24:26.522187   14634 cli_runner.go:164] Run: docker network inspect existing-network
W1119 02:24:26.538168   14634 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1119 02:24:26.538197   14634 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1119 02:24:26.538221   14634 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1119 02:24:26.538337   14634 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 02:24:26.554530   14634 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84ce244e4c23 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:55:7c:db:e3:4e} reservation:<nil>}
I1119 02:24:26.554963   14634 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d1ce90}
I1119 02:24:26.554997   14634 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1119 02:24:26.555050   14634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1119 02:24:26.599441   14634 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-465627 --network=existing-network
E1119 02:24:29.487463   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-465627 --network=existing-network: (21.366301307s)
helpers_test.go:175: Cleaning up "existing-network-465627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-465627
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-465627: (1.970218236s)
I1119 02:24:49.952249   14634 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.46s)
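
Note: minikube only creates a network when the name passed to --network does not already exist; here the test pre-creates one and minikube adopts it, and deleting the profile must leave the externally created network in place. Sketch (subnet and names from this run):

    docker network create --driver=bridge --subnet=192.168.58.0/24 \
      --gateway=192.168.58.1 existing-network
    minikube start -p existing-network-465627 --network=existing-network
    minikube delete -p existing-network-465627
    docker network inspect existing-network --format '{{.Name}} still exists'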

TestKicCustomSubnet (27.5s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-205508 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-205508 --subnet=192.168.60.0/24: (25.39611493s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-205508 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-205508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-205508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-205508: (2.080628695s)
--- PASS: TestKicCustomSubnet (27.50s)

TestKicStaticIP (22.42s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-353463 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-353463 --static-ip=192.168.200.200: (20.215876295s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-353463 ip
helpers_test.go:175: Cleaning up "static-ip-353463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-353463
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-353463: (2.073922842s)
--- PASS: TestKicStaticIP (22.42s)
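
Note: --static-ip pins the node container's address (minikube appears to create a matching network around it); `minikube ip` is the quickest confirmation. Sketch with this run's values:

    minikube start -p static-ip-353463 --static-ip=192.168.200.200
    minikube -p static-ip-353463 ip   # -> 192.168.200.200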

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (45.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-877124 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-877124 --driver=docker  --container-runtime=crio: (19.636334578s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-880084 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-880084 --driver=docker  --container-runtime=crio: (20.365533731s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-877124
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-880084
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-880084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-880084
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-880084: (2.299479007s)
helpers_test.go:175: Cleaning up "first-877124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-877124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-877124: (2.291779452s)
--- PASS: TestMinikubeProfile (45.76s)
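
The two `profile list -ojson` calls above emit machine-readable profile data. A minimal sketch of consuming that output; the struct below models only the fields needed here and is an assumption about the schema, not minikube's full output type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList is an assumed, partial view of the JSON schema.
	type profileList struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Println(p.Name)
		}
	}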

TestMountStart/serial/StartWithMountFirst (7.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-477159 --memory=3072 --mount-string /tmp/TestMountStartserial3151322089/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-477159 --memory=3072 --mount-string /tmp/TestMountStartserial3151322089/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.884572648s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.88s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-477159 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (4.82s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-489226 --memory=3072 --mount-string /tmp/TestMountStartserial3151322089/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-489226 --memory=3072 --mount-string /tmp/TestMountStartserial3151322089/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.815368238s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.82s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489226 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-477159 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-477159 --alsologtostderr -v=5: (1.630148108s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489226 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-489226
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-489226: (1.239545797s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.43s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-489226
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-489226: (6.427261654s)
--- PASS: TestMountStart/serial/RestartStopped (7.43s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489226 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (65.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557513 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-557513 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.171873293s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.64s)

TestMultiNode/serial/DeployApp2Nodes (3.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-557513 -- rollout status deployment/busybox: (1.853955586s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-fjnv9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-t2vt5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-fjnv9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-t2vt5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-fjnv9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-t2vt5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.68s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-fjnv9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-fjnv9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-t2vt5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557513 -- exec busybox-7b57f96db7-t2vt5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
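
The shell pipeline in this test (nslookup ... | awk 'NR==5' | cut -d' ' -f3) extracts the resolved host IP: line 5 of busybox-style nslookup output, third space-separated field. A self-contained sketch of that parse; the sample input is illustrative, modeled on busybox 1.28 output, not captured from this run:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mirrors `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
	// nslookup output, then its third single-space-separated field.
	func hostIP(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		// Illustrative sample; the real input comes from the busybox pod.
		sample := strings.Join([]string{
			"Server:    10.96.0.10",
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local",
			"",
			"Name:      host.minikube.internal",
			"Address 1: 192.168.67.1 host.minikube.internal",
		}, "\n")
		fmt.Println(hostIP(sample)) // 192.168.67.1
	}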

TestMultiNode/serial/AddNode (26.13s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-557513 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-557513 -v=5 --alsologtostderr: (25.531740725s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.13s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-557513 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (9.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status --output json --alsologtostderr
E1119 02:28:28.327306   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp testdata/cp-test.txt multinode-557513:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2565339910/001/cp-test_multinode-557513.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513:/home/docker/cp-test.txt multinode-557513-m02:/home/docker/cp-test_multinode-557513_multinode-557513-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m02 "sudo cat /home/docker/cp-test_multinode-557513_multinode-557513-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513:/home/docker/cp-test.txt multinode-557513-m03:/home/docker/cp-test_multinode-557513_multinode-557513-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m03 "sudo cat /home/docker/cp-test_multinode-557513_multinode-557513-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp testdata/cp-test.txt multinode-557513-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2565339910/001/cp-test_multinode-557513-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513-m02:/home/docker/cp-test.txt multinode-557513:/home/docker/cp-test_multinode-557513-m02_multinode-557513.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513 "sudo cat /home/docker/cp-test_multinode-557513-m02_multinode-557513.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513-m02:/home/docker/cp-test.txt multinode-557513-m03:/home/docker/cp-test_multinode-557513-m02_multinode-557513-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m03 "sudo cat /home/docker/cp-test_multinode-557513-m02_multinode-557513-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp testdata/cp-test.txt multinode-557513-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2565339910/001/cp-test_multinode-557513-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513-m03:/home/docker/cp-test.txt multinode-557513:/home/docker/cp-test_multinode-557513-m03_multinode-557513.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513 "sudo cat /home/docker/cp-test_multinode-557513-m03_multinode-557513.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 cp multinode-557513-m03:/home/docker/cp-test.txt multinode-557513-m02:/home/docker/cp-test_multinode-557513-m03_multinode-557513-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 ssh -n multinode-557513-m02 "sudo cat /home/docker/cp-test_multinode-557513-m03_multinode-557513-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.24s)
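
Each cp/ssh pair above is one round-trip check: copy the fixture into a node, cat it back over ssh, compare. A condensed sketch of a single round-trip, with the binary path and names taken from the log and error handling reduced to panics:

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func run(args ...string) []byte {
		out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		run("-p", "multinode-557513", "cp", "testdata/cp-test.txt",
			"multinode-557513:/home/docker/cp-test.txt")
		got := run("-p", "multinode-557513", "ssh", "-n", "multinode-557513",
			"sudo cat /home/docker/cp-test.txt")
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			panic("cp round-trip mismatch")
		}
	}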

TestMultiNode/serial/StopNode (2.18s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-557513 node stop m03: (1.244507951s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-557513 status: exit status 7 (465.518424ms)

-- stdout --
	multinode-557513
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-557513-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-557513-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-557513 status --alsologtostderr: exit status 7 (467.819096ms)

-- stdout --
	multinode-557513
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-557513-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-557513-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1119 02:28:38.858185  153739 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:28:38.858300  153739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:28:38.858310  153739 out.go:374] Setting ErrFile to fd 2...
	I1119 02:28:38.858316  153739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:28:38.858604  153739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:28:38.858790  153739 out.go:368] Setting JSON to false
	I1119 02:28:38.858830  153739 mustload.go:66] Loading cluster: multinode-557513
	I1119 02:28:38.858911  153739 notify.go:221] Checking for updates...
	I1119 02:28:38.859250  153739 config.go:182] Loaded profile config "multinode-557513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:28:38.859269  153739 status.go:174] checking status of multinode-557513 ...
	I1119 02:28:38.859746  153739 cli_runner.go:164] Run: docker container inspect multinode-557513 --format={{.State.Status}}
	I1119 02:28:38.877292  153739 status.go:371] multinode-557513 host status = "Running" (err=<nil>)
	I1119 02:28:38.877343  153739 host.go:66] Checking if "multinode-557513" exists ...
	I1119 02:28:38.877658  153739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-557513
	I1119 02:28:38.894206  153739 host.go:66] Checking if "multinode-557513" exists ...
	I1119 02:28:38.894482  153739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:28:38.894518  153739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-557513
	I1119 02:28:38.910962  153739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/multinode-557513/id_rsa Username:docker}
	I1119 02:28:39.002175  153739 ssh_runner.go:195] Run: systemctl --version
	I1119 02:28:39.008099  153739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:28:39.019546  153739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:28:39.076467  153739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-19 02:28:39.067491056 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:28:39.077156  153739 kubeconfig.go:125] found "multinode-557513" server: "https://192.168.67.2:8443"
	I1119 02:28:39.077188  153739 api_server.go:166] Checking apiserver status ...
	I1119 02:28:39.077229  153739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:28:39.088125  153739 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup
	W1119 02:28:39.095784  153739 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:28:39.095827  153739 ssh_runner.go:195] Run: ls
	I1119 02:28:39.099189  153739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1119 02:28:39.102993  153739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1119 02:28:39.103014  153739 status.go:463] multinode-557513 apiserver status = Running (err=<nil>)
	I1119 02:28:39.103025  153739 status.go:176] multinode-557513 status: &{Name:multinode-557513 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:28:39.103043  153739 status.go:174] checking status of multinode-557513-m02 ...
	I1119 02:28:39.103273  153739 cli_runner.go:164] Run: docker container inspect multinode-557513-m02 --format={{.State.Status}}
	I1119 02:28:39.120128  153739 status.go:371] multinode-557513-m02 host status = "Running" (err=<nil>)
	I1119 02:28:39.120153  153739 host.go:66] Checking if "multinode-557513-m02" exists ...
	I1119 02:28:39.120362  153739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-557513-m02
	I1119 02:28:39.136791  153739 host.go:66] Checking if "multinode-557513-m02" exists ...
	I1119 02:28:39.137049  153739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:28:39.137085  153739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-557513-m02
	I1119 02:28:39.152954  153739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21924-11126/.minikube/machines/multinode-557513-m02/id_rsa Username:docker}
	I1119 02:28:39.242014  153739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:28:39.253403  153739 status.go:176] multinode-557513-m02 status: &{Name:multinode-557513-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:28:39.253443  153739 status.go:174] checking status of multinode-557513-m03 ...
	I1119 02:28:39.253690  153739 cli_runner.go:164] Run: docker container inspect multinode-557513-m03 --format={{.State.Status}}
	I1119 02:28:39.271042  153739 status.go:371] multinode-557513-m03 host status = "Stopped" (err=<nil>)
	I1119 02:28:39.271058  153739 status.go:384] host is not running, skipping remaining checks
	I1119 02:28:39.271063  153739 status.go:176] multinode-557513-m03 status: &{Name:multinode-557513-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)
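
Note that exit status 7 from `minikube status` is expected here: it signals that at least one host is stopped, not that the command failed. A sketch of how a caller can tell the two apart, with the binary path and profile name taken from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "multinode-557513", "status").Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit code 7 means a node is stopped; stdout is still populated.
			fmt.Printf("status exited %d\n", ee.ExitCode())
		} else if err != nil {
			panic(err) // the command itself could not run
		}
		fmt.Print(string(out))
	}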

TestMultiNode/serial/StartAfterStop (7.03s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-557513 node start m03 -v=5 --alsologtostderr: (6.369128484s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.03s)

TestMultiNode/serial/RestartKeepsNodes (84.09s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-557513
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-557513
E1119 02:29:01.783810   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-557513: (31.758798968s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557513 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-557513 --wait=true -v=5 --alsologtostderr: (52.217239086s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-557513
--- PASS: TestMultiNode/serial/RestartKeepsNodes (84.09s)
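
The two `node list` calls bracket the stop/start cycle; the point of the test is that their output matches. A condensed sketch of that comparison, with the binary path and profile from the log:

	package main

	import (
		"bytes"
		"os/exec"
	)

	func nodeList() []byte {
		out, err := exec.Command("out/minikube-linux-amd64",
			"node", "list", "-p", "multinode-557513").Output()
		if err != nil {
			panic(err)
		}
		return out
	}

	func main() {
		before := nodeList()
		// ... stop and restart the cluster here, as the test does ...
		after := nodeList()
		if !bytes.Equal(before, after) {
			panic("restart changed the node list")
		}
	}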

TestMultiNode/serial/DeleteNode (5.13s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-557513 node delete m03: (4.572586357s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.13s)
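
The kubectl query at multinode_test.go:444 is a Go template over the node list. A self-contained sketch of the same template logic; the data shape below is a simplified stand-in with exported fields (text/template over Go structs requires them), whereas the real kubectl template walks the lowercase JSON form:

	package main

	import (
		"os"
		"text/template"
	)

	type cond struct{ Type, Status string }

	type node struct {
		Status struct{ Conditions []cond }
	}

	func main() {
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		var n node
		n.Status.Conditions = []cond{{Type: "Ready", Status: "True"}}
		if err := tmpl.Execute(os.Stdout, map[string][]node{"items": {n}}); err != nil {
			panic(err)
		}
	}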

TestMultiNode/serial/StopMultiNode (28.52s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-557513 stop: (28.329927456s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-557513 status: exit status 7 (98.057761ms)

-- stdout --
	multinode-557513
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-557513-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-557513 status --alsologtostderr: exit status 7 (92.428593ms)

-- stdout --
	multinode-557513
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-557513-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1119 02:30:44.008568  163592 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:30:44.008672  163592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:30:44.008684  163592 out.go:374] Setting ErrFile to fd 2...
	I1119 02:30:44.008690  163592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:30:44.008863  163592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:30:44.009044  163592 out.go:368] Setting JSON to false
	I1119 02:30:44.009074  163592 mustload.go:66] Loading cluster: multinode-557513
	I1119 02:30:44.009163  163592 notify.go:221] Checking for updates...
	I1119 02:30:44.009414  163592 config.go:182] Loaded profile config "multinode-557513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:30:44.009426  163592 status.go:174] checking status of multinode-557513 ...
	I1119 02:30:44.009850  163592 cli_runner.go:164] Run: docker container inspect multinode-557513 --format={{.State.Status}}
	I1119 02:30:44.028990  163592 status.go:371] multinode-557513 host status = "Stopped" (err=<nil>)
	I1119 02:30:44.029013  163592 status.go:384] host is not running, skipping remaining checks
	I1119 02:30:44.029020  163592 status.go:176] multinode-557513 status: &{Name:multinode-557513 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:30:44.029054  163592 status.go:174] checking status of multinode-557513-m02 ...
	I1119 02:30:44.029381  163592 cli_runner.go:164] Run: docker container inspect multinode-557513-m02 --format={{.State.Status}}
	I1119 02:30:44.046946  163592 status.go:371] multinode-557513-m02 host status = "Stopped" (err=<nil>)
	I1119 02:30:44.046963  163592 status.go:384] host is not running, skipping remaining checks
	I1119 02:30:44.046968  163592 status.go:176] multinode-557513-m02 status: &{Name:multinode-557513-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.52s)

TestMultiNode/serial/RestartMultiNode (28.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557513 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-557513 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (27.705937365s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557513 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (28.27s)

TestMultiNode/serial/ValidateNameConflict (23.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-557513
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557513-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-557513-m02 --driver=docker  --container-runtime=crio: exit status 14 (69.48178ms)

-- stdout --
	* [multinode-557513-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-557513-m02' is duplicated with machine name 'multinode-557513-m02' in profile 'multinode-557513'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557513-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-557513-m03 --driver=docker  --container-runtime=crio: (20.972098132s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-557513
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-557513: exit status 80 (276.421586ms)

-- stdout --
	* Adding node m03 to cluster multinode-557513 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-557513-m03 already exists in multinode-557513-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-557513-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-557513-m03: (2.298253658s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.67s)

TestPreload (109.95s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-191769 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-191769 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (48.022485639s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-191769 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-191769
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-191769: (5.816653883s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-191769 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-191769 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.589313659s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-191769 image list
helpers_test.go:175: Cleaning up "test-preload-191769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-191769
E1119 02:33:28.327811   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-191769: (2.343415555s)
--- PASS: TestPreload (109.95s)
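
The closing `image list` call is the actual preload assertion: the busybox image pulled before the stop must survive the restart. A minimal sketch of that check, with names taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "test-preload-191769", "image", "list").Output()
		if err != nil {
			panic(err)
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox image missing after restart")
		}
	}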

TestScheduledStopUnix (94.22s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-693027 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-693027 --memory=3072 --driver=docker  --container-runtime=crio: (18.881829016s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-693027 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1119 02:33:48.985297  180557 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:33:48.985601  180557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:48.985612  180557 out.go:374] Setting ErrFile to fd 2...
	I1119 02:33:48.985618  180557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:48.985808  180557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:33:48.986051  180557 out.go:368] Setting JSON to false
	I1119 02:33:48.986158  180557 mustload.go:66] Loading cluster: scheduled-stop-693027
	I1119 02:33:48.986508  180557 config.go:182] Loaded profile config "scheduled-stop-693027": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:33:48.986597  180557 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/config.json ...
	I1119 02:33:48.986791  180557 mustload.go:66] Loading cluster: scheduled-stop-693027
	I1119 02:33:48.986924  180557 config.go:182] Loaded profile config "scheduled-stop-693027": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-693027 -n scheduled-stop-693027
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1119 02:33:49.349107  180709 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:33:49.349413  180709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:49.349427  180709 out.go:374] Setting ErrFile to fd 2...
	I1119 02:33:49.349452  180709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:49.349888  180709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:33:49.350552  180709 out.go:368] Setting JSON to false
	I1119 02:33:49.350794  180709 daemonize_unix.go:73] killing process 180591 as it is an old scheduled stop
	I1119 02:33:49.350906  180709 mustload.go:66] Loading cluster: scheduled-stop-693027
	I1119 02:33:49.351338  180709 config.go:182] Loaded profile config "scheduled-stop-693027": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:33:49.351450  180709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/config.json ...
	I1119 02:33:49.351689  180709 mustload.go:66] Loading cluster: scheduled-stop-693027
	I1119 02:33:49.351836  180709 config.go:182] Loaded profile config "scheduled-stop-693027": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1119 02:33:49.355671   14634 retry.go:31] will retry after 122.732µs: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.356834   14634 retry.go:31] will retry after 160.308µs: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.357957   14634 retry.go:31] will retry after 335.733µs: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.359082   14634 retry.go:31] will retry after 195.378µs: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.360201   14634 retry.go:31] will retry after 727.054µs: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.361329   14634 retry.go:31] will retry after 930.949µs: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.362483   14634 retry.go:31] will retry after 839.59µs: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.363611   14634 retry.go:31] will retry after 2.064635ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.365754   14634 retry.go:31] will retry after 1.546105ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.367950   14634 retry.go:31] will retry after 2.643482ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.371193   14634 retry.go:31] will retry after 4.507859ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.376391   14634 retry.go:31] will retry after 6.186402ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.383704   14634 retry.go:31] will retry after 6.683652ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.391025   14634 retry.go:31] will retry after 14.368152ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.406222   14634 retry.go:31] will retry after 42.24041ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
I1119 02:33:49.449470   14634 retry.go:31] will retry after 44.536916ms: open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-693027 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1119 02:34:01.785207   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-693027 -n scheduled-stop-693027
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-693027
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-693027 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1119 02:34:15.198446  181350 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:34:15.198532  181350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:34:15.198540  181350 out.go:374] Setting ErrFile to fd 2...
	I1119 02:34:15.198553  181350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:34:15.198743  181350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:34:15.198952  181350 out.go:368] Setting JSON to false
	I1119 02:34:15.199025  181350 mustload.go:66] Loading cluster: scheduled-stop-693027
	I1119 02:34:15.199306  181350 config.go:182] Loaded profile config "scheduled-stop-693027": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:34:15.199365  181350 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/config.json ...
	I1119 02:34:15.199586  181350 mustload.go:66] Loading cluster: scheduled-stop-693027
	I1119 02:34:15.199683  181350 config.go:182] Loaded profile config "scheduled-stop-693027": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-693027
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-693027: exit status 7 (76.219038ms)

-- stdout --
	scheduled-stop-693027
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-693027 -n scheduled-stop-693027
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-693027 -n scheduled-stop-693027: exit status 7 (72.721845ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-693027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-693027
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-693027: (3.91302113s)
--- PASS: TestScheduledStopUnix (94.22s)
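
The retry.go lines in this test show the harness polling for the scheduled-stop pid file with a growing delay. A minimal sketch of that wait-with-backoff pattern, not the harness's actual implementation; the path is from the log, and the starting delay and attempt cap are arbitrary:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile polls until the file is readable, roughly doubling the
	// delay between attempts, and gives up after the given number of tries.
	func waitForFile(path string, attempts int) ([]byte, error) {
		delay := 100 * time.Microsecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			b, err := os.ReadFile(path)
			if err == nil {
				return b, nil
			}
			lastErr = err
			time.Sleep(delay)
			delay *= 2
		}
		return nil, lastErr
	}

	func main() {
		pid := "/home/jenkins/minikube-integration/21924-11126/.minikube/profiles/scheduled-stop-693027/pid"
		if _, err := waitForFile(pid, 16); err != nil {
			fmt.Println("gave up:", err)
		}
	}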

TestInsufficientStorage (12.29s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-609759 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-609759 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.903021148s)

-- stdout --
	{"specversion":"1.0","id":"a569a3f9-81c6-4025-95a7-2b22cc35955e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-609759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"89a80a2d-8f2a-473f-86cc-81eec74dec12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21924"}}
	{"specversion":"1.0","id":"d398dbd3-a28d-4683-ab50-7069c4a6500e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"39d25302-001e-4ef3-bcf9-f554d5cabfb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig"}}
	{"specversion":"1.0","id":"9772c378-b0a1-4593-804c-84db5b45559b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube"}}
	{"specversion":"1.0","id":"25d33341-495f-4ae3-b7ae-134751f48c54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cc6d29c1-55a9-4a3b-bc9f-8a3db9235495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dc0c41e2-7ee6-4f6f-9963-217600fd5db7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8cfacf2a-b04f-40ac-843a-e41ffc811635","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0d6f6f06-192c-4e7f-9a6f-1eff736aa7ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"609dfede-3f22-4887-98c7-2967e92fadb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e7840668-1030-49f8-9251-89471281ff73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-609759\" primary control-plane node in \"insufficient-storage-609759\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"93a9e984-cffa-4788-b727-b11547a7c9af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b697d17-817b-4d30-8c6a-4b9eb626eb37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae958ffa-51b1-416b-a730-deee77d9e324","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
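Each line in the stdout block above is a CloudEvents-style JSON object emitted by minikube's JSON output mode. A minimal Go sketch of consuming such a stream, assuming only the field names visible in this log (the struct and the error filter are illustrative, not minikube's own API):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields seen in the events above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		// io.k8s.sigs.minikube.error events carry the exit code and message,
		// e.g. RSRC_DOCKER_STORAGE with exitcode 26 above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}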
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-609759 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-609759 --output=json --layout=cluster: exit status 7 (276.752853ms)
-- stdout --
	{"Name":"insufficient-storage-609759","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-609759","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1119 02:35:14.433473  183893 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-609759" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-609759 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-609759 --output=json --layout=cluster: exit status 7 (273.58327ms)
-- stdout --
	{"Name":"insufficient-storage-609759","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-609759","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1119 02:35:14.707569  184005 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-609759" does not appear in /home/jenkins/minikube-integration/21924-11126/kubeconfig
	E1119 02:35:14.717511  184005 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/insufficient-storage-609759/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-609759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-609759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-609759: (1.83688061s)
--- PASS: TestInsufficientStorage (12.29s)
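For reference, the status payloads above decode into a shape like the following Go sketch; the type names are invented here, while the field names and status codes (507 InsufficientStorage, 405 Stopped, 500 Error) come straight from the output:

package status

// Component mirrors entries such as "apiserver" and "kubelet" above.
type Component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"` // e.g. 405 Stopped, 500 Error
	StatusName string `json:"StatusName"`
}

// Node mirrors the entries in the "Nodes" array above.
type Node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]Component `json:"Components"`
}

// ClusterState mirrors the top-level object printed by
// `minikube status --output=json --layout=cluster`.
type ClusterState struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"` // 507 InsufficientStorage here
	StatusName    string               `json:"StatusName"`
	StatusDetail  string               `json:"StatusDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]Component `json:"Components"`
	Nodes         []Node               `json:"Nodes"`
}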

TestRunningBinaryUpgrade (48.3s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1525486990 start -p running-upgrade-001803 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1525486990 start -p running-upgrade-001803 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.898410173s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-001803 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1119 02:39:01.782793   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-001803 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.376082458s)
helpers_test.go:175: Cleaning up "running-upgrade-001803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-001803
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-001803: (2.523578972s)
--- PASS: TestRunningBinaryUpgrade (48.30s)
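The flow above is: an older release binary creates the profile, then the binary under test restarts it in place. A compact sketch of that sequence, using the exact commands from this log (the run helper is ours):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, echoing its combined output.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	// Old release starts the profile ...
	run("/tmp/minikube-v1.32.0.1525486990", "start", "-p", "running-upgrade-001803",
		"--memory=3072", "--vm-driver=docker", "--container-runtime=crio")
	// ... then the freshly built binary takes over the same, still-running profile.
	run("out/minikube-linux-amd64", "start", "-p", "running-upgrade-001803",
		"--memory=3072", "--driver=docker", "--container-runtime=crio")
}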

TestKubernetesUpgrade (300.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-284802 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-284802 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.44016634s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-284802
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-284802: (2.153580665s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-284802 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-284802 status --format={{.Host}}: exit status 7 (76.19999ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-284802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-284802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.683940541s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-284802 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-284802 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-284802 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (107.439919ms)
-- stdout --
	* [kubernetes-upgrade-284802] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-284802
	    minikube start -p kubernetes-upgrade-284802 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2848022 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-284802 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-284802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-284802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.913516933s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-284802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-284802
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-284802: (2.85506785s)
--- PASS: TestKubernetesUpgrade (300.32s)
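The downgrade refusal above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) amounts to a semantic-version comparison: upgrades pass, downgrades are rejected. A hedged sketch of such a guard, using golang.org/x/mod/semver rather than minikube's actual code:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange rejects any requested version older than the one the
// cluster is already running; the wording echoes the error in this log.
func checkVersionChange(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkVersionChange("v1.34.1", "v1.28.0")) // downgrade -> error
	fmt.Println(checkVersionChange("v1.28.0", "v1.34.1")) // upgrade   -> <nil>
}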

TestMissingContainerUpgrade (110.44s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1482250855 start -p missing-upgrade-577121 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1482250855 start -p missing-upgrade-577121 --memory=3072 --driver=docker  --container-runtime=crio: (1m7.247265318s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-577121
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-577121: (1.818134996s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-577121
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-577121 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-577121 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.851359194s)
helpers_test.go:175: Cleaning up "missing-upgrade-577121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-577121
E1119 02:38:28.327883   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-577121: (5.012666333s)
--- PASS: TestMissingContainerUpgrade (110.44s)

TestPause/serial/Start (54.26s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-881232 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-881232 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.260314472s)
--- PASS: TestPause/serial/Start (54.26s)

TestPause/serial/SecondStartNoReconfiguration (6.48s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-881232 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-881232 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.468747906s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-358955 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-358955 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (73.571323ms)
-- stdout --
	* [NoKubernetes-358955] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
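Exit status 14 corresponds to minikube's MK_USAGE error class. A small illustrative sketch of the mutual-exclusion check this subtest exercises (the flag wiring here is ours, not minikube's):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// The two flags contradict each other, so reject the combination up front,
	// as the stderr above shows minikube doing.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr,
			"X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
}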

TestNoKubernetes/serial/StartWithK8s (22.51s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-358955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-358955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.198631661s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-358955 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.51s)

TestNetworkPlugins/group/false (3.71s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-001617 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-001617 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (157.959971ms)
-- stdout --
	* [false-001617] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1119 02:36:30.950423  207033 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:36:30.950706  207033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:36:30.950716  207033 out.go:374] Setting ErrFile to fd 2...
	I1119 02:36:30.950720  207033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:36:30.950924  207033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11126/.minikube/bin
	I1119 02:36:30.951355  207033 out.go:368] Setting JSON to false
	I1119 02:36:30.952416  207033 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4738,"bootTime":1763515053,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:36:30.952528  207033 start.go:143] virtualization: kvm guest
	I1119 02:36:30.954384  207033 out.go:179] * [false-001617] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:36:30.955625  207033 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:36:30.955653  207033 notify.go:221] Checking for updates...
	I1119 02:36:30.957699  207033 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:36:30.958697  207033 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11126/kubeconfig
	I1119 02:36:30.959644  207033 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11126/.minikube
	I1119 02:36:30.960569  207033 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:36:30.961616  207033 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:36:30.963069  207033 config.go:182] Loaded profile config "NoKubernetes-358955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:30.963176  207033 config.go:182] Loaded profile config "cert-expiration-455061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:30.963285  207033 config.go:182] Loaded profile config "offline-crio-852644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:36:30.963398  207033 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:36:30.986883  207033 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:36:30.986952  207033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:36:31.046677  207033 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 02:36:31.037308885 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652060160 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:36:31.046773  207033 docker.go:319] overlay module found
	I1119 02:36:31.048378  207033 out.go:179] * Using the docker driver based on user configuration
	I1119 02:36:31.049353  207033 start.go:309] selected driver: docker
	I1119 02:36:31.049366  207033 start.go:930] validating driver "docker" against <nil>
	I1119 02:36:31.049377  207033 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:36:31.050954  207033 out.go:203] 
	W1119 02:36:31.051849  207033 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1119 02:36:31.052761  207033 out.go:203] 
** /stderr **
E1119 02:36:31.403330   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:88: 
----------------------- debugLogs start: false-001617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-001617

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-001617

>>> host: /etc/nsswitch.conf:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /etc/hosts:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /etc/resolv.conf:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-001617

>>> host: crictl pods:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: crictl containers:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> k8s: describe netcat deployment:
error: context "false-001617" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-001617" does not exist

>>> k8s: netcat logs:
error: context "false-001617" does not exist

>>> k8s: describe coredns deployment:
error: context "false-001617" does not exist

>>> k8s: describe coredns pods:
error: context "false-001617" does not exist

>>> k8s: coredns logs:
error: context "false-001617" does not exist

>>> k8s: describe api server pod(s):
error: context "false-001617" does not exist

>>> k8s: api server logs:
error: context "false-001617" does not exist

>>> host: /etc/cni:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: ip a s:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: ip r s:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: iptables-save:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: iptables table nat:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> k8s: describe kube-proxy daemon set:
error: context "false-001617" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-001617" does not exist

>>> k8s: kube-proxy logs:
error: context "false-001617" does not exist

>>> host: kubelet daemon status:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: kubelet daemon config:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> k8s: kubelet logs:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:36:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-455061
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:35:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: offline-crio-852644
contexts:
- context:
    cluster: cert-expiration-455061
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:36:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-455061
  name: cert-expiration-455061
- context:
    cluster: offline-crio-852644
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:35:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-852644
  name: offline-crio-852644
current-context: ""
kind: Config
users:
- name: cert-expiration-455061
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/cert-expiration-455061/client.crt
    client-key: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/cert-expiration-455061/client.key
- name: offline-crio-852644
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/offline-crio-852644/client.crt
    client-key: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/offline-crio-852644/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-001617

>>> host: docker daemon status:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: docker daemon config:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /etc/docker/daemon.json:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: docker system info:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: cri-docker daemon status:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: cri-docker daemon config:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: cri-dockerd version:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: containerd daemon status:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: containerd daemon config:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /etc/containerd/config.toml:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: containerd config dump:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: crio daemon status:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: crio daemon config:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: /etc/crio:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

>>> host: crio config:
* Profile "false-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-001617"

----------------------- debugLogs end: false-001617 [took: 3.156194638s] --------------------------------
helpers_test.go:175: Cleaning up "false-001617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-001617
--- PASS: TestNetworkPlugins/group/false (3.71s)
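The stderr above shows the validation this test relies on: the crio runtime refuses --cni=false with an MK_USAGE error. An illustrative sketch of that check (our own function, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
)

// validateCNI rejects the one combination this test probes: crio with CNI
// explicitly disabled. The message echoes the log above.
func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return errors.New(`The "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false")) // rejected, as in the log above
}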

TestStoppedBinaryUpgrade/Setup (0.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

TestStoppedBinaryUpgrade/Upgrade (104s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1736792920 start -p stopped-upgrade-593217 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1736792920 start -p stopped-upgrade-593217 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m12.114789906s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1736792920 -p stopped-upgrade-593217 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1736792920 -p stopped-upgrade-593217 stop: (11.913437553s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-593217 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-593217 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.973787104s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.00s)

TestNoKubernetes/serial/StartWithStopK8s (29.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-358955 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-358955 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.160965463s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-358955 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-358955 status -o json: exit status 2 (301.542747ms)
-- stdout --
	{"Name":"NoKubernetes-358955","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-358955
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-358955: (2.007159809s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.47s)

TestNoKubernetes/serial/Start (6.86s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-358955 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-358955 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.857212079s)
--- PASS: TestNoKubernetes/serial/Start (6.86s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
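A sketch of the check this subtest names: with --no-kubernetes, no Kubernetes binaries should land in the version cache (the v0.0.0 path is from the log above; the assertion style is ours):

package main

import (
	"fmt"
	"os"
)

func main() {
	dir := "/home/jenkins/minikube-integration/21924-11126/.minikube/cache/linux/amd64/v0.0.0"
	entries, err := os.ReadDir(dir)
	// Either the directory was never created, or it exists but stayed empty.
	if os.IsNotExist(err) || (err == nil && len(entries) == 0) {
		fmt.Println("no Kubernetes downloads cached, as expected")
		return
	}
	fmt.Printf("unexpected cache contents: %d entries, err=%v\n", len(entries), err)
}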

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-358955 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-358955 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.467554ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
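The assertion pattern here is "command must fail": systemd's is-active exits non-zero (3 for an inactive unit, which surfaces as the ssh status above) when kubelet is stopped. A sketch of the same check from Go, assuming the binary path and profile name shown above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-358955",
		"sudo systemctl is-active --quiet service kubelet")
	// Success here would mean kubelet is running, which is the failure case.
	if err := cmd.Run(); err == nil {
		fmt.Println("kubelet is active; expected it to be stopped")
	} else {
		fmt.Println("kubelet not running, as expected:", err)
	}
}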

TestNoKubernetes/serial/ProfileList (1.21s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

TestNoKubernetes/serial/Stop (1.71s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-358955
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-358955: (1.708747209s)
--- PASS: TestNoKubernetes/serial/Stop (1.71s)

TestNoKubernetes/serial/StartNoArgs (10.21s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-358955 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-358955 --driver=docker  --container-runtime=crio: (10.213946528s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.21s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-358955 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-358955 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.793532ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-593217
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

TestNetworkPlugins/group/auto/Start (36.89s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (36.891907135s)
--- PASS: TestNetworkPlugins/group/auto/Start (36.89s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-001617 "pgrep -a kubelet"
I1119 02:39:06.971657   14634 config.go:182] Loaded profile config "auto-001617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-001617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l4jfm" [f87862c8-6691-49df-ab04-e24effb23bad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l4jfm" [f87862c8-6691-49df-ab04-e24effb23bad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003922819s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.20s)
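
testdata/netcat-deployment.yaml itself is not reproduced in this report. From the pod status lines above and the probes below, one can infer a Deployment labeled app=netcat whose container (named dnsutils) listens on 8080, fronted by a Service named netcat. A hypothetical equivalent, with the image and listener command assumed rather than taken from the real testdata:

    kubectl --context auto-001617 apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: netcat
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: netcat
      template:
        metadata:
          labels:
            app: netcat
        spec:
          containers:
          - name: dnsutils                                        # container name seen in the pod status above
            image: registry.k8s.io/e2e-test-images/agnhost:2.40   # assumed image
            command: ["/agnhost", "netexec", "--http-port=8080"]  # assumed listener on 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: netcat    # the HairPin probe below dials this Service by name
    spec:
      selector:
        app: netcat
      ports:
      - port: 8080
    EOF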

TestNetworkPlugins/group/kindnet/Start (74.69s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m14.685117309s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.69s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-001617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
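
The three probes above exercise, in order: cluster DNS (nslookup of kubernetes.default through the pod's resolver), plain loopback connectivity, and hairpin traffic, where the pod reaches itself through its own Service VIP (netcat on 8080). Hairpin only works when the node's bridge loops a pod's packets back to their source port; on bridge-based plugins this can be inspected from the node via the standard Linux bridge sysfs attribute (assuming the CNI actually uses a bridge):

    # a value of 1 means hairpin mode is enabled on that bridge port
    minikube ssh -p auto-001617 "cat /sys/class/net/*/brport/hairpin_mode 2>/dev/null"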

TestNetworkPlugins/group/calico/Start (50.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.317449907s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.32s)

TestNetworkPlugins/group/custom-flannel/Start (45.71s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (45.714834926s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (45.71s)

TestNetworkPlugins/group/calico/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gds5x" [08dd3e1c-cd9c-4c3d-a442-cbe208b2942a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-gds5x" [08dd3e1c-cd9c-4c3d-a442-cbe208b2942a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.053952355s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.06s)
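
The harness polls the labeled pods itself (the helpers_test.go:352 lines above record each phase transition). Roughly the same wait can be expressed with stock kubectl, with an arbitrarily chosen timeout:

    # block until the calico-node DaemonSet pod reports Ready
    kubectl --context calico-001617 -n kube-system wait pod \
      -l k8s-app=calico-node --for=condition=Ready --timeout=10m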

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-001617 "pgrep -a kubelet"
I1119 02:40:18.537695   14634 config.go:182] Loaded profile config "calico-001617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-001617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5wwsq" [18c96397-88ea-4798-b410-1a5f9b5d699f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5wwsq" [18c96397-88ea-4798-b410-1a5f9b5d699f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.00435606s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-001617 "pgrep -a kubelet"
I1119 02:40:22.537750   14634 config.go:182] Loaded profile config "custom-flannel-001617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-001617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rjqrh" [4e9c63d3-7e66-46e8-bf98-169513dd0d54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rjqrh" [4e9c63d3-7e66-46e8-bf98-169513dd0d54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003877404s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-9kxcd" [9b490543-b4ab-4091-ac95-94535789b504] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003529919s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-001617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

TestNetworkPlugins/group/calico/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.08s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-001617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.10s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.08s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-001617 "pgrep -a kubelet"
I1119 02:40:32.012956   14634 config.go:182] Loaded profile config "kindnet-001617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-001617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4bfjw" [abaabd74-ce38-4cbb-ac90-2b8951f5a4ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4bfjw" [abaabd74-ce38-4cbb-ac90-2b8951f5a4ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003266763s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-001617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (73.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m13.360312965s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.36s)

TestNetworkPlugins/group/flannel/Start (46.67s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.672772319s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.67s)

TestNetworkPlugins/group/bridge/Start (67.45s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-001617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.44529814s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.45s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-mdv6w" [21e2c302-21b4-491b-9e66-b0f3cc671ed5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00325067s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-001617 "pgrep -a kubelet"
I1119 02:41:46.136018   14634 config.go:182] Loaded profile config "flannel-001617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-001617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4f79j" [4afc9db1-f7d7-4fcb-ac2f-70ed2f9ba6d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4f79j" [4afc9db1-f7d7-4fcb-ac2f-70ed2f9ba6d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003301345s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

TestNetworkPlugins/group/flannel/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-001617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.10s)

TestNetworkPlugins/group/flannel/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

TestNetworkPlugins/group/flannel/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-001617 "pgrep -a kubelet"
I1119 02:42:00.561519   14634 config.go:182] Loaded profile config "enable-default-cni-001617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-001617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kkc2f" [6e9b675a-ad25-4c2f-9d6d-d2039b30cf9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kkc2f" [6e9b675a-ad25-4c2f-9d6d-d2039b30cf9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00381967s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-001617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-001617 "pgrep -a kubelet"
I1119 02:42:10.225937   14634 config.go:182] Loaded profile config "bridge-001617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-001617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fwtzc" [fd1b5995-03ba-410c-8e25-ad58f8c03eff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fwtzc" [fd1b5995-03ba-410c-8e25-ad58f8c03eff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004247698s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (55.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.603277552s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (55.60s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-001617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-001617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (59.65s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (59.650794868s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.65s)
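
--preload=false skips the preloaded-images tarball minikube normally lays down for the chosen runtime, so the control-plane images are pulled individually; that is the point of the no-preload group, and part of why this start runs longer than the embed-certs one below. The resulting image set can be listed afterwards:

    minikube -p no-preload-837474 image list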

TestStartStop/group/embed-certs/serial/FirstStart (46.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.571235497s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.57s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.633836705s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.63s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-987573 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9c204876-422a-41f9-9047-80e08d35da45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9c204876-422a-41f9-9047-80e08d35da45] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003738165s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-987573 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.25s)
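
The closing assertion runs ulimit -n inside the pod to confirm the container inherited a sane open-files limit. testdata/busybox.yaml is not included in the report; a hypothetical equivalent (the image tag matches the image list later in this report, the sleep command is assumed):

    kubectl --context old-k8s-version-987573 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox   # the label the harness waits on
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]  # assumed; just keeps the pod Running
    EOF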

TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-811173 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e73ec6be-f0d4-46e6-8113-18b6d64163b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e73ec6be-f0d4-46e6-8113-18b6d64163b1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003551771s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-811173 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

TestStartStop/group/old-k8s-version/serial/Stop (16.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-987573 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-987573 --alsologtostderr -v=3: (16.093480279s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-167150 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [08eabde5-9057-44c1-9c3d-ee7388fc4224] Pending
helpers_test.go:352: "busybox" [08eabde5-9057-44c1-9c3d-ee7388fc4224] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1119 02:43:28.327378   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/addons-167289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [08eabde5-9057-44c1-9c3d-ee7388fc4224] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003233584s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-167150 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.24s)

TestStartStop/group/no-preload/serial/DeployApp (7.21s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-837474 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7c27bd17-157a-4f48-89a3-960cbf7e1a9c] Pending
helpers_test.go:352: "busybox" [7c27bd17-157a-4f48-89a3-960cbf7e1a9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7c27bd17-157a-4f48-89a3-960cbf7e1a9c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004072021s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-837474 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.21s)

TestStartStop/group/embed-certs/serial/Stop (18.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-811173 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-811173 --alsologtostderr -v=3: (18.089993263s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-167150 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-167150 --alsologtostderr -v=3: (18.029057375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-987573 -n old-k8s-version-987573
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-987573 -n old-k8s-version-987573: exit status 7 (79.68088ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-987573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
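
minikube status exits non-zero whenever the host is not Running (status 7 above), which the test explicitly tolerates ("may be ok"); enabling an addon on a stopped profile only updates its saved config, and the addon is deployed once the cluster starts again. The same sequence by hand:

    minikube status -p old-k8s-version-987573 --format='{{.Host}}' \
      || echo "host not running (status exit $?)"
    minikube addons enable dashboard -p old-k8s-version-987573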

TestStartStop/group/old-k8s-version/serial/SecondStart (26.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-987573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (25.914489289s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-987573 -n old-k8s-version-987573
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (26.46s)

TestStartStop/group/no-preload/serial/Stop (16.27s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-837474 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-837474 --alsologtostderr -v=3: (16.268497174s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-811173 -n embed-certs-811173
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-811173 -n embed-certs-811173: exit status 7 (76.827562ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-811173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (47.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-811173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.881774601s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-811173 -n embed-certs-811173
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150: exit status 7 (75.694438ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-167150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-167150 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.011061089s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167150 -n default-k8s-diff-port-167150
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.34s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837474 -n no-preload-837474
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837474 -n no-preload-837474: exit status 7 (84.621825ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-837474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (52.91s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1119 02:44:01.782482   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/functional-345998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-837474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.540277312s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837474 -n no-preload-837474
E1119 02:44:48.135722   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.91s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mshqj" [5857a23b-a4e9-46c7-8df9-28cdb04e7452] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1119 02:44:07.157941   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:07.164490   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:07.175901   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:07.197465   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:07.238854   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:07.320601   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:07.482341   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:07.804347   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:08.447571   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mshqj" [5857a23b-a4e9-46c7-8df9-28cdb04e7452] Running
E1119 02:44:09.728952   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:44:12.290234   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.005784181s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)
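
The interleaved E1119 cert_rotation lines come from the shared test kubeconfig: it still points at client certificates under profiles/auto-001617, a profile deleted earlier in the run, so client-go's certificate reload loop keeps logging the missing file. They are noise with respect to this test. Once a profile is gone, its stale kubeconfig entries can be pruned (entry names assumed to match the profile name, as minikube creates them):

    kubectl config delete-context auto-001617
    kubectl config delete-cluster auto-001617
    kubectl config unset users.auto-001617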

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mshqj" [5857a23b-a4e9-46c7-8df9-28cdb04e7452] Running
E1119 02:44:17.411872   14634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/auto-001617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003832173s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-987573 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-987573 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
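
image list --format=json emits one JSON object per image, and the harness flags tags outside the expected Kubernetes set as "non-minikube", as above. The same listing can be inspected by hand; the repoTags field name is assumed from recent minikube releases:

    # print every tag the runtime currently holds
    minikube -p old-k8s-version-987573 image list --format=json \
      | jq -r '.[].repoTags[]'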

TestStartStop/group/newest-cni/serial/FirstStart (26.37s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (26.372924885s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.37s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-22wsb" [4b50cb2c-20ed-4a89-880a-37e815c7e447] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003276336s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-22wsb" [4b50cb2c-20ed-4a89-880a-37e815c7e447] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003411969s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-811173 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p96nm" [6f86096a-5658-426d-b3dc-6edeb5e215e9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003517837s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-811173 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8rhqr" [875975d2-e3f6-411f-88f9-8c4fa8628e09] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003109994s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p96nm" [6f86096a-5658-426d-b3dc-6edeb5e215e9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003979076s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-167150 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-167150 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8rhqr" [875975d2-e3f6-411f-88f9-8c4fa8628e09] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002948485s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-837474 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-956139 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-956139 --alsologtostderr -v=3: (2.519148972s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.52s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-837474 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139: exit status 7 (86.470578ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-956139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
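The flow this test exercises can be approximated as a standalone sketch (profile name and image override taken from the log above; `minikube status` exits 7 for a stopped host, which the test treats as acceptable):

	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139   # exit status 7 == Stopped, expected here
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-956139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4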

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-956139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (11.059451259s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956139 -n newest-cni-956139
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.37s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-956139 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
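Each VerifyKubernetesImages check above parses the same JSON a user can inspect directly; assuming jq is available on the host (jq is not part of the recorded run), a quick look is:

	out/minikube-linux-amd64 -p newest-cni-956139 image list --format=json | jq .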

Test skip (27/328)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-001617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-001617

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-001617

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /etc/hosts:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /etc/resolv.conf:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-001617

>>> host: crictl pods:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: crictl containers:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> k8s: describe netcat deployment:
error: context "kubenet-001617" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-001617" does not exist

>>> k8s: netcat logs:
error: context "kubenet-001617" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-001617" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-001617" does not exist

>>> k8s: coredns logs:
error: context "kubenet-001617" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-001617" does not exist

>>> k8s: api server logs:
error: context "kubenet-001617" does not exist

>>> host: /etc/cni:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: ip a s:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: ip r s:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: iptables-save:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: iptables table nat:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-001617" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-001617" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-001617" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: kubelet daemon config:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> k8s: kubelet logs:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:36:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-455061
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:35:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: offline-crio-852644
contexts:
- context:
    cluster: cert-expiration-455061
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:36:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-455061
  name: cert-expiration-455061
- context:
    cluster: offline-crio-852644
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:35:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-852644
  name: offline-crio-852644
current-context: ""
kind: Config
users:
- name: cert-expiration-455061
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/cert-expiration-455061/client.crt
    client-key: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/cert-expiration-455061/client.key
- name: offline-crio-852644
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/offline-crio-852644/client.crt
    client-key: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/offline-crio-852644/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-001617

>>> host: docker daemon status:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: docker daemon config:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: docker system info:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: cri-docker daemon status:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: cri-docker daemon config:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: cri-dockerd version:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: containerd daemon status:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: containerd daemon config:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: containerd config dump:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: crio daemon status:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: crio daemon config:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: /etc/crio:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

>>> host: crio config:
* Profile "kubenet-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-001617"

----------------------- debugLogs end: kubenet-001617 [took: 3.199285624s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-001617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-001617
--- SKIP: TestNetworkPlugins/group/kubenet (3.36s)
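The repeated "context was not found" / "does not exist" lines in the debugLogs above are expected: the kubenet profile is skipped before any cluster is created, so no kubeconfig context exists for it, and the dumped kubeconfig only contains cert-expiration-455061 and offline-crio-852644. A sketch of the same observation (not part of the recorded run):

	kubectl config get-contexts
	kubectl --context kubenet-001617 get pods -A   # fails: context was not found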

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-001617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-001617

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-001617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-001617" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-001617" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-001617" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: kubelet daemon config:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> k8s: kubelet logs:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-11126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:36:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-455061
contexts:
- context:
    cluster: cert-expiration-455061
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:36:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-455061
  name: cert-expiration-455061
current-context: ""
kind: Config
users:
- name: cert-expiration-455061
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/cert-expiration-455061/client.crt
    client-key: /home/jenkins/minikube-integration/21924-11126/.minikube/profiles/cert-expiration-455061/client.key
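The kubeconfig above accounts for every kubectl failure in this dump: current-context is empty and the only remaining entry is cert-expiration-455061, so any command pinned to the cilium-001617 context has nothing to resolve. A minimal standalone sketch of that check using client-go (the program and its argument handling are illustrative, not minikube code):

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	if len(os.Args) < 2 {
    		fmt.Fprintln(os.Stderr, "usage: ctxcheck <kubeconfig>")
    		os.Exit(2)
    	}
    	// Load the same kubeconfig the collectors consulted.
    	cfg, err := clientcmd.LoadFromFile(os.Args[1])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Both conditions hold in the dump above: current-context is ""
    	// and there is no entry for the deleted cilium-001617 profile.
    	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
    	_, ok := cfg.Contexts["cilium-001617"]
    	fmt.Printf("cilium-001617 context present: %v\n", ok)
    }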

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-001617

>>> host: docker daemon status:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: docker daemon config:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: docker system info:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: cri-docker daemon status:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: cri-docker daemon config:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: cri-dockerd version:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: containerd daemon status:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: containerd daemon config:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: containerd config dump:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: crio daemon status:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: crio daemon config:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: /etc/crio:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

>>> host: crio config:
* Profile "cilium-001617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-001617"

----------------------- debugLogs end: cilium-001617 [took: 4.54121485s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-001617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-001617
--- SKIP: TestNetworkPlugins/group/cilium (4.75s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-682232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-682232
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
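The skip at start_stop_delete_test.go:101 is a driver gate: the subtest exercises driver mounts only under virtualbox, and this run used the docker driver. A sketch of that guard's shape, with a hypothetical DriverName helper standing in for however the suite reports the active driver (the real check lives in start_stop_delete_test.go):

    package integration

    import "testing"

    // DriverName is a hypothetical stand-in for however the suite exposes
    // the active VM driver; this run used the docker driver.
    func DriverName() string { return "docker" }

    func TestDisableDriverMounts(t *testing.T) {
    	if DriverName() != "virtualbox" {
    		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
    	}
    	// ... exercise --disable-driver-mounts against a real VM here ...
    }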
